| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string | 5 chars | 139 chars |
| author | string | 2 chars | 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-29 06:27:49 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (502 classes) | n/a | n/a |
| tags | sequence | 1 item | 4.05k items |
| pipeline_tag | string (54 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-29 06:23:06 |
| card | string | 11 chars | 1.01M chars |
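For working with the dump programmatically, a minimal inspection sketch with the `datasets` library; the dataset ID below is a hypothetical placeholder for wherever this dump is hosted:

```python
from datasets import load_dataset

# Hypothetical repo ID; substitute the actual dataset holding this dump.
ds = load_dataset("models-metadata/hub-dump", split="train")

row = ds[0]  # each row follows the schema above
print(row["modelId"], row["author"], row["downloads"], row["likes"])
print(row["tags"][:5])    # `tags` is a sequence of 1 to ~4.05k strings
print(row["card"][:200])  # `card` holds the raw README markdown (up to ~1.01 MB)
```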
xbilek25/whisper-medium-en-cv-6.3
xbilek25
2025-05-04T18:24:50Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-medium.en", "base_model:finetune:openai/whisper-medium.en", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-04T07:46:33Z
--- library_name: transformers language: - en license: apache-2.0 base_model: openai/whisper-medium.en tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - wer model-index: - name: whisper-medium-en-cv-6.3 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 17.0 type: mozilla-foundation/common_voice_17_0 args: 'config: en, split: test' metrics: - name: Wer type: wer value: 30.496019595835882 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-en-cv-6.3 This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: - Loss: 1.0849 - Wer: 30.4960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 48 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 375 - training_steps: 3750 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 2.4579 | 46.5401 | | 0.7966 | 0.1 | 375 | 1.0410 | 35.4868 | | 0.5995 | 0.2 | 750 | 0.9551 | 32.9149 | | 0.3331 | 1.1 | 1125 | 0.9558 | 32.7312 | | 0.2529 | 1.2 | 1500 | 0.9757 | 32.3944 | | 0.1245 | 2.1 | 1875 | 0.9818 | 32.0882 | | 0.1024 | 2.2 | 2250 | 1.0125 | 31.3227 | | 0.0495 | 3.1 | 2625 | 1.0336 | 32.0576 | | 0.0438 | 3.2 | 3000 | 1.0665 | 30.8022 | | 0.021 | 4.1 | 3375 | 1.0777 | 31.3840 | | 0.0236 | 4.2 | 3750 | 1.0849 | 30.4960 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
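The card above gives training details but no inference snippet; a minimal sketch using the standard transformers ASR pipeline (the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="xbilek25/whisper-medium-en-cv-6.3",
)
# "sample.wav" is a placeholder; any English speech clip works.
print(asr("sample.wav")["text"])
```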
TakalaWang/Discussion-Phi-4-multimodal-instruct-audio
TakalaWang
2025-05-04T18:22:13Z
2
0
transformers
[ "transformers", "tensorboard", "safetensors", "phi4mm", "text-generation", "generated_from_trainer", "conversational", "custom_code", "base_model:microsoft/Phi-4-multimodal-instruct", "base_model:finetune:microsoft/Phi-4-multimodal-instruct", "license:mit", "autotrain_compatible", "region:us" ]
text-generation
2025-05-04T07:40:22Z
--- library_name: transformers license: mit base_model: microsoft/Phi-4-multimodal-instruct tags: - generated_from_trainer model-index: - name: Discussion-Phi-4-multimodal-instruct-audio results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Discussion-Phi-4-multimodal-instruct-audio This model is a fine-tuned version of [microsoft/Phi-4-multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 14.0220 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.95) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.2573 | 0.2235 | 10 | 14.4554 | | 0.3506 | 0.4469 | 20 | 14.1744 | | 0.2464 | 0.6704 | 30 | 14.0838 | | 0.3058 | 0.8939 | 40 | 14.0603 | | 0.1855 | 1.1117 | 50 | 14.0604 | | 0.1807 | 1.3352 | 60 | 14.0120 | | 0.2227 | 1.5587 | 70 | 14.0404 | | 0.2353 | 1.7821 | 80 | 14.0772 | | 0.1167 | 2.0 | 90 | 14.1155 | | 0.2013 | 2.2235 | 100 | 14.0047 | | 0.1677 | 2.4469 | 110 | 13.9101 | | 0.172 | 2.6704 | 120 | 13.9451 | | 0.1325 | 2.8939 | 130 | 14.0220 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
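The `custom_code` tag on this record means loading the checkpoint executes model code shipped in the repo; a minimal loading sketch, assuming the fine-tune keeps the base model's remote-code classes (audio preprocessing is omitted):

```python
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "TakalaWang/Discussion-Phi-4-multimodal-instruct-audio"
# trust_remote_code is required because phi4mm ships its own modeling code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```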
Xenna/xenna-g3-4b
Xenna
2025-05-04T18:20:44Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T18:20:31Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Xenna - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Jonjew/LindsayLohanMeanGirls
Jonjew
2025-05-04T18:18:41Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:unknown", "region:us" ]
text-to-image
2025-05-04T18:18:35Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/lohan.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: unknown --- # lindsay lohan mean girls by DoctorOcto <Gallery /> ## Model description From https://civitai.com/models/1541582/lindsay-lohan-mean-girls?modelVersionId=1744250 Please support the creator by donating BUZZ and liking the model at the page above. ## Download model Weights for this model are available in Safetensors format. [Download](/Jonjew/LindsayLohanMeanGirls/tree/main) them in the Files & versions tab.
darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF
darkc0de
2025-05-04T18:18:21Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "uncensored", "harmful", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:darkc0de/BlackXorDolphTronGOAT", "base_model:quantized:darkc0de/BlackXorDolphTronGOAT", "license:wtfpl", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-04T18:17:08Z
--- base_model: darkc0de/BlackXorDolphTronGOAT library_name: transformers license: wtfpl pipeline_tag: text-generation tags: - mergekit - merge - uncensored - harmful - llama-cpp - gguf-my-repo --- # darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF This model was converted to GGUF format from [`darkc0de/BlackXorDolphTronGOAT`](https://huggingface.co/darkc0de/BlackXorDolphTronGOAT) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/darkc0de/BlackXorDolphTronGOAT) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF --hf-file blackxordolphtrongoat-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF --hf-file blackxordolphtrongoat-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF --hf-file blackxordolphtrongoat-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo darkc0de/BlackXorDolphTronGOAT-Q5_K_S-GGUF --hf-file blackxordolphtrongoat-q5_k_s.gguf -c 2048 ```
matrixportal/Aya-X-Mod-GGUF
matrixportal
2025-05-04T18:14:19Z
0
0
transformers
[ "transformers", "gguf", "matrixportal", "tr", "en", "base_model:huihui-ai/aya-expanse-8b-abliterated", "base_model:quantized:huihui-ai/aya-expanse-8b-abliterated", "license:apache-2.0", "region:us", "conversational" ]
null
2025-05-04T18:02:56Z
--- base_model: huihui-ai/aya-expanse-8b-abliterated language: - tr - en library_name: transformers license: apache-2.0 tags: - matrixportal inference: false --- # Aya-X-Mod GGUF Quantized Models ## Technical Details - **Quantization Tool:** llama.cpp - **Version:** 5278 (6eb7d25c) ## Model Information - **Base Model:** [matrixportal/Aya-X-Mod](https://huggingface.co/matrixportal/Aya-X-Mod) - **Quantized by:** [matrixportal](https://huggingface.co/matrixportal) ## Available Files | 🚀 Download | 🔢 Type | 📝 Description | |------------|---------|---------------| | [Download](https://huggingface.co/matrixportal/Aya-X-Mod-GGUF/resolve/main/aya-x-mod.q4_k_m.gguf) | Q4 K M | 4-bit balanced (recommended default) | 💡 **Q4 K M** provides the best balance for most use cases
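The table above links the Q4_K_M file directly; a minimal sketch fetching it into the local cache with `huggingface_hub`, for use with llama.cpp or any other GGUF runtime:

```python
from huggingface_hub import hf_hub_download

# Downloads the quantized file listed in the table above.
path = hf_hub_download(
    repo_id="matrixportal/Aya-X-Mod-GGUF",
    filename="aya-x-mod.q4_k_m.gguf",
)
print(path)  # pass this path to e.g. `llama-cli -m <path>`
```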
hendrydong/qwen-7b-reinforce-rej-step320
hendrydong
2025-05-04T18:11:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T18:08:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
srkoala/kfbatista-lora
srkoala
2025-05-04T18:10:36Z
0
0
null
[ "license:other", "region:us" ]
null
2025-05-04T17:40:45Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
ntnu-smil/whisper-large-v3-sandi-7k-64-448steps-merged
ntnu-smil
2025-05-04T18:07:55Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "wft", "audio", "speech", "generated_from_trainer", "en", "dataset:ntnu-smil/sandi2025-ds", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-04T18:06:33Z
--- library_name: transformers language: - en license: apache-2.0 base_model: openai/whisper-large-v3 tags: - wft - whisper - automatic-speech-recognition - audio - speech - generated_from_trainer datasets: - ntnu-smil/sandi2025-ds metrics: - wer model-index: - name: whisper-large-v3-sandi-7k-64-448steps results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: ntnu-smil/sandi2025-ds type: ntnu-smil/sandi2025-ds metrics: - type: wer value: 24.09465733000756 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-sandi-7k-64-448steps This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the ntnu-smil/sandi2025-ds dataset. It achieves the following results on the evaluation set: - Loss: 0.5722 - Wer: 24.0947 - Cer: 56.2387 - Decode Runtime: 203.1841 - Wer Runtime: 0.1735 - Cer Runtime: 0.3276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.98) and epsilon=1e-06 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 448 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Decode Runtime | Wer Runtime | Cer Runtime | |:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:--------------:|:-----------:|:-----------:| | 0.6419 | 1.0223 | 112 | 0.6663 | 20.0083 | 24.8032 | 187.4701 | 0.1653 | 0.2986 | | 0.6651 | 2.0446 | 224 | 0.6117 | 20.0564 | 34.0018 | 189.8527 | 0.1717 | 0.3134 | | 0.4682 | 3.0670 | 336 | 0.5826 | 21.0683 | 32.6385 | 190.6981 | 0.1750 | 0.3082 | | 0.8059 | 4.0893 | 448 | 0.5722 | 24.0947 | 56.2387 | 203.1841 | 0.1735 | 0.3276 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
Ghouri77/DADAM
Ghouri77
2025-05-04T18:07:44Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T18:07:44Z
--- license: apache-2.0 ---
akoruk/gemma-3-12b
akoruk
2025-05-04T18:07:05Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T18:06:50Z
--- base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** akoruk - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ntnu-smil/whisper-large-v3-sandi-7k-64-448steps
ntnu-smil
2025-05-04T18:06:32Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "wft", "whisper", "automatic-speech-recognition", "audio", "speech", "generated_from_trainer", "en", "dataset:ntnu-smil/sandi2025-ds", "base_model:openai/whisper-large-v3", "base_model:adapter:openai/whisper-large-v3", "license:apache-2.0", "model-index", "region:us" ]
automatic-speech-recognition
2025-05-04T14:41:37Z
--- library_name: peft language: - en license: apache-2.0 base_model: openai/whisper-large-v3 tags: - wft - whisper - automatic-speech-recognition - audio - speech - generated_from_trainer datasets: - ntnu-smil/sandi2025-ds metrics: - wer model-index: - name: whisper-large-v3-sandi-7k-64-448steps results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: ntnu-smil/sandi2025-ds type: ntnu-smil/sandi2025-ds metrics: - type: wer value: 24.09465733000756 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v3-sandi-7k-64-448steps This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the ntnu-smil/sandi2025-ds dataset. It achieves the following results on the evaluation set: - Loss: 0.5722 - Wer: 24.0947 - Cer: 56.2387 - Decode Runtime: 203.1841 - Wer Runtime: 0.1735 - Cer Runtime: 0.3276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.98) and epsilon=1e-06 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 448 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Decode Runtime | Wer Runtime | Cer Runtime | |:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:--------------:|:-----------:|:-----------:| | 0.6419 | 1.0223 | 112 | 0.6663 | 20.0083 | 24.8032 | 187.4701 | 0.1653 | 0.2986 | | 0.6651 | 2.0446 | 224 | 0.6117 | 20.0564 | 34.0018 | 189.8527 | 0.1717 | 0.3134 | | 0.4682 | 3.0670 | 336 | 0.5826 | 21.0683 | 32.6385 | 190.6981 | 0.1750 | 0.3082 | | 0.8059 | 4.0893 | 448 | 0.5722 | 24.0947 | 56.2387 | 203.1841 | 0.1735 | 0.3276 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
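Unlike the `-merged` repo listed earlier, this record stores only a PEFT (LoRA) adapter; a minimal sketch attaching it to the base Whisper model:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(
    base, "ntnu-smil/whisper-large-v3-sandi-7k-64-448steps"
)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
# model.generate(...) now runs Whisper with the fine-tuned adapter applied.
```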
ZainYasir/puck_lora_output
ZainYasir
2025-05-04T18:02:10Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2025-05-04T17:55:10Z
--- library_name: peft license: apache-2.0 base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 tags: - generated_from_trainer model-index: - name: puck_lora_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # puck_lora_output This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.51.1 - Pytorch 2.5.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.0
Caring4u/GPU.Net
Caring4u
2025-05-04T17:55:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T17:55:40Z
--- license: apache-2.0 ---
Culturedniichan/Capybara-v1-24B-Q3_K_M-GGUF
Culturedniichan
2025-05-04T17:52:51Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Culturedniichan/Capybara-v1-24B", "base_model:quantized:Culturedniichan/Capybara-v1-24B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T17:51:57Z
--- base_model: Culturedniichan/Capybara-v1-24B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # Culturedniichan/Capybara-v1-24B-Q3_K_M-GGUF This model was converted to GGUF format from [`Culturedniichan/Capybara-v1-24B`](https://huggingface.co/Culturedniichan/Capybara-v1-24B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Culturedniichan/Capybara-v1-24B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Culturedniichan/Capybara-v1-24B-Q3_K_M-GGUF --hf-file capybara-v1-24b-q3_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Culturedniichan/Capybara-v1-24B-Q3_K_M-GGUF --hf-file capybara-v1-24b-q3_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Culturedniichan/Capybara-v1-24B-Q3_K_M-GGUF --hf-file capybara-v1-24b-q3_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Culturedniichan/Capybara-v1-24B-Q3_K_M-GGUF --hf-file capybara-v1-24b-q3_k_m.gguf -c 2048 ```
hendrydong/qwen-7b-reinforce-rej-step200
hendrydong
2025-05-04T17:45:27Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:42:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
moyixiao/unsloth_llama3_1b_bf16merged
moyixiao
2025-05-04T17:44:54Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:moyixiao/Llama-3.2-1B", "base_model:finetune:moyixiao/Llama-3.2-1B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:42:42Z
--- base_model: moyixiao/Llama-3.2-1B tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** moyixiao - **License:** apache-2.0 - **Finetuned from model :** moyixiao/Llama-3.2-1B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
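This Unsloth template card ships no inference snippet; since the repo name indicates merged bf16 weights, a minimal sketch with the plain transformers pipeline (sampling settings are arbitrary):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation", model="moyixiao/unsloth_llama3_1b_bf16merged"
)
out = generator(
    "The key idea behind LoRA fine-tuning is", max_new_tokens=64, do_sample=True
)
print(out[0]["generated_text"])
```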
infogep/cf8355d0-5cdf-4867-9a45-e1e7a85149ca
infogep
2025-05-04T17:39:53Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:adapter:microsoft/Phi-3-mini-128k-instruct", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T17:31:24Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-128k-instruct tags: - axolotl - generated_from_trainer model-index: - name: cf8355d0-5cdf-4867-9a45-e1e7a85149ca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: microsoft/Phi-3-mini-128k-instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 237c3ce2c4d7cbcc_train_data.json ds_type: json format: custom path: /workspace/input_data/237c3ce2c4d7cbcc_train_data.json type: field_instruction: prompt field_output: init_response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogep/cf8355d0-5cdf-4867-9a45-e1e7a85149ca hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/237c3ce2c4d7cbcc_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a01fc652-5bdd-49d3-8d1e-eb2377cbd602 wandb_project: s56-7 wandb_run: your_name wandb_runid: a01fc652-5bdd-49d3-8d1e-eb2377cbd602 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # cf8355d0-5cdf-4867-9a45-e1e7a85149ca This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.9995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8448 | 0.0288 | 150 | 0.9995 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
devam-sheth-bits/finetuned-sleep-ai-multi-chat
devam-sheth-bits
2025-05-04T17:39:31Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-410m", "base_model:finetune:EleutherAI/pythia-410m", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T13:20:05Z
--- library_name: transformers license: apache-2.0 base_model: EleutherAI/pythia-410m tags: - generated_from_trainer model-index: - name: finetuned-sleep-ai-multi-chat results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-sleep-ai-multi-chat This model is a fine-tuned version of [EleutherAI/pythia-410m](https://huggingface.co/EleutherAI/pythia-410m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cpu - Datasets 3.5.1 - Tokenizers 0.21.1
tonyshelby/Qwen2.5_1.5B_SFT_sample
tonyshelby
2025-05-04T17:39:08Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:38:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nourani1831/Trading_viewmodern
Nourani1831
2025-05-04T17:38:25Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T17:38:25Z
--- license: apache-2.0 ---
aquiffoo/aquif-moe-800m
aquiffoo
2025-05-04T17:38:13Z
0
0
transformers
[ "transformers", "safetensors", "granitemoe", "text-generation", "language", "aquif", "moe", "granite", "text-generation-inference", "conversational", "en", "pt", "es", "fr", "base_model:ibm-granite/granite-3.1-3b-a800m-base", "base_model:finetune:ibm-granite/granite-3.1-3b-a800m-base", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-05-04T16:21:59Z
--- pipeline_tag: text-generation inference: false license: apache-2.0 library_name: transformers tags: - language - aquif - moe - granite - text-generation-inference base_model: - ibm-granite/granite-3.1-3b-a800m-base language: - en - pt - es - fr --- # aquif-moe-800m **aquif-moe-800m** is our first Mixture of Experts (MoE) model, with only 800 million active parameters. Despite its compact size, it delivers exceptional performance-per-VRAM efficiency compared to larger models. ## Model Overview - **Name**: `aquif-moe-800m` - **Parameters**: 800 million active parameters (3.3 billion total) - **Context Window**: 128,000 tokens - **Architecture**: Mixture of Experts (MoE) - **Type**: General-purpose LLM - **Hosted on**: [Ollama](https://ollama.com/aquiffoo/aquif-moe-800m) ## Key Features - Extremely efficient VRAM utilization (57.8 performance points per GB) - Expansive 128K token context window for handling long documents - Competitive performance against models with more parameters - Optimized for local inference on consumer hardware - Ideal for resource-constrained environments - Supports high-throughput concurrent sessions ## Performance Benchmarks aquif-moe-800m demonstrates state-of-the-art performance across multiple benchmarks, especially when considering its parameter efficiency: | Benchmark | aquif-moe (0.8b) | Llama 3.2 (1b) | Gemma 3 (4b) | |--------------|------------------|----------------|--------------| | **MMLU** | 52.2 | 49.3 | **59.6** | | **HumanEval**| **37.5** | 22.6 | 36.0 | | **GSM8K** | **49.0** | 44.4 | 38.4 | | **Average** | **46.2** | 38.8 | 44.7 | ## VRAM Efficiency One of aquif-moe-800m's standout features is its exceptional VRAM efficiency: | Model | Average Performance | VRAM (GB) | Performance per VRAM | |------------------|---------------------|-----------|----------------------| | **aquif-moe** | 46.2 | 0.8 | 57.8 | | **Llama 3.2** | 38.8 | 1.2 | 32.3 | | **Gemma 3** | 44.7 | 4.3 | 10.4 | ## Use Cases - Edge computing and resource-constrained environments - Mobile and embedded applications - Local development environments - Quick prototyping and testing - Personal assistants on consumer hardware - Enterprise deployment with multiple concurrent sessions - Long document analysis and summarization - High-throughput production environments ## Limitations - No thinking mode capability - May show hallucinations in some areas - May struggle with more complex reasoning tasks - Not optimized for specialized domains ## Getting Started To run via [Ollama](https://ollama.com): ```bash ollama run aquiffoo/aquif-moe-800m ``` ## Technical Details The aquif-moe-800m leverages a Mixture of Experts architecture to achieve high parameter efficiency. While the total parameter count is larger, only 800 million parameters are activated during inference, allowing for significantly reduced VRAM requirements while maintaining competitive performance. ### Enterprise Deployment The model's exceptional VRAM efficiency makes it particularly valuable for enterprise deployments: - **Concurrent Sessions**: Run multiple model instances on a single GPU - **High Throughput**: Serve more users with the same hardware footprint - **Cost Efficiency**: Lower infrastructure costs for production deployments - **Scalability**: Easier horizontal scaling across available resources The 128K context window enables comprehensive document analysis while maintaining the model's efficient resource utilization, making it suitable for enterprises dealing with large documents or extended conversations. 
*Note: All performance metrics are approximate estimates based on internal evaluations.*
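The performance-per-VRAM column in the card above is simply the benchmark average divided by the VRAM footprint; a quick arithmetic check of the reported figures:

```python
# (average benchmark score, VRAM in GB) as reported in the card above
models = {"aquif-moe": (46.2, 0.8), "Llama 3.2": (38.8, 1.2), "Gemma 3": (44.7, 4.3)}
for name, (score, vram) in models.items():
    print(f"{name}: {score / vram:.1f} points per GB")
# -> 57.8, 32.3, and 10.4, matching the table
```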
HYUKJUNCHOI/0504_llam_7ep_1e-4_freeze
HYUKJUNCHOI
2025-05-04T17:37:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mllama", "trl", "en", "base_model:unsloth/Llama-3.2-11B-Vision-Instruct", "base_model:finetune:unsloth/Llama-3.2-11B-Vision-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T17:37:39Z
--- base_model: unsloth/Llama-3.2-11B-Vision-Instruct tags: - text-generation-inference - transformers - unsloth - mllama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** HYUKJUNCHOI - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-11B-Vision-Instruct This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
hendrydong/qwen-7b-reinforce-rej-step160
hendrydong
2025-05-04T17:36:55Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:34:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
joboffer/454c57ed-2f81-4e38-b373-5b50480c721d
joboffer
2025-05-04T17:36:06Z
0
0
peft
[ "peft", "safetensors", "phi3", "axolotl", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-128k-instruct", "base_model:adapter:microsoft/Phi-3-mini-128k-instruct", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T17:31:47Z
--- library_name: peft license: mit base_model: microsoft/Phi-3-mini-128k-instruct tags: - axolotl - generated_from_trainer model-index: - name: 454c57ed-2f81-4e38-b373-5b50480c721d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: microsoft/Phi-3-mini-128k-instruct bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 237c3ce2c4d7cbcc_train_data.json ds_type: json format: custom path: /workspace/input_data/237c3ce2c4d7cbcc_train_data.json type: field_instruction: prompt field_output: init_response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: joboffer/454c57ed-2f81-4e38-b373-5b50480c721d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/237c3ce2c4d7cbcc_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a01fc652-5bdd-49d3-8d1e-eb2377cbd602 wandb_project: s56-33 wandb_run: your_name wandb_runid: a01fc652-5bdd-49d3-8d1e-eb2377cbd602 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 454c57ed-2f81-4e38-b373-5b50480c721d This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7782 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.9271 | 0.0384 | 200 | 0.7782 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
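As a usage sketch (not part of the original card): since this repo holds a LoRA adapter, it is loaded on top of the base model named above with PEFT. The repo ids come from this card; the dtype and device placement are assumptions.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption; training ran in bf16 with 4-bit loading
    trust_remote_code=True,
    device_map="auto",
)
# Attach the LoRA weights published in this repo
model = PeftModel.from_pretrained(base, "joboffer/454c57ed-2f81-4e38-b373-5b50480c721d")
```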
cosmos98a/mem0_llama_4_scout_fine_tuned_f16
cosmos98a
2025-05-04T17:32:44Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T17:25:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jhyun0414/20250505-Llama-3.1-8B-Instruct-orm_label-filter-e3-lr2e-6
jhyun0414
2025-05-04T17:31:21Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:24:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
splenderpic/elinahayes
splenderpic
2025-05-04T17:31:04Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-04T17:30:51Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym widget: - output: url: sample/elinahayes_003200_00_20250504172904.png text: ElinaHayes base_model: black-forest-labs/FLUX.1-dev instance_prompt: ElinaHayes license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # ElinaHayes A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `ElinaHayes` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
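A minimal diffusers sketch for using this LoRA, assuming the standard Fluxgym export; the safetensors filename is hypothetical, so check the Files & versions tab for the actual weight name.

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# weight_name is an assumption; use the filename listed in the repo
pipeline.load_lora_weights("splenderpic/elinahayes", weight_name="elinahayes.safetensors")
image = pipeline("ElinaHayes").images[0]  # "ElinaHayes" is the trigger word from this card
image.save("elinahayes.png")
```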
mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF
mradermacher
2025-05-04T17:31:00Z
5
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksTesting/Progenitor-Chrome-LLaMa-70B", "base_model:quantized:TareksTesting/Progenitor-Chrome-LLaMa-70B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T08:44:59Z
--- base_model: TareksTesting/Progenitor-Chrome-LLaMa-70B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TareksTesting/Progenitor-Chrome-LLaMa-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Progenitor-Chrome-LLaMa-70B-i1-GGUF/resolve/main/Progenitor-Chrome-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
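One way to run these quants locally, sketched with llama-cpp-python under stated assumptions: the file path, context size, and prompt are illustrative, and multi-part quants such as i1-Q6_K must first be concatenated into a single .gguf as the linked READMEs describe.

```py
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Progenitor-Chrome-LLaMa-70B.i1-Q4_K_M.gguf",  # the quant recommended above
    n_ctx=4096,  # illustrative context window
)
out = llm("Write a short story opening.", max_tokens=128)
print(out["choices"][0]["text"])
```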
TareksLab/Persona-V1-70B
TareksLab
2025-05-04T17:25:10Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:Sao10K/70B-L3.3-mhnnn-x1", "base_model:merge:Sao10K/70B-L3.3-mhnnn-x1", "base_model:SentientAGI/Dobby-Unhinged-Llama-3.3-70B", "base_model:merge:SentientAGI/Dobby-Unhinged-Llama-3.3-70B", "base_model:flammenai/Llama3.1-Flammades-70B", "base_model:merge:flammenai/Llama3.1-Flammades-70B", "base_model:flammenai/Mahou-1.5-llama3.1-70B", "base_model:merge:flammenai/Mahou-1.5-llama3.1-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T16:50:15Z
--- base_model: - flammenai/Mahou-1.5-llama3.1-70B - Sao10K/70B-L3.3-mhnnn-x1 - SentientAGI/Dobby-Unhinged-Llama-3.3-70B - flammenai/Llama3.1-Flammades-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1) as a base. ### Models Merged The following models were included in the merge: * [flammenai/Mahou-1.5-llama3.1-70B](https://huggingface.co/flammenai/Mahou-1.5-llama3.1-70B) * [SentientAGI/Dobby-Unhinged-Llama-3.3-70B](https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B) * [flammenai/Llama3.1-Flammades-70B](https://huggingface.co/flammenai/Llama3.1-Flammades-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: flammenai/Mahou-1.5-llama3.1-70B parameters: weight: 0.25 density: 0.5 - model: flammenai/Llama3.1-Flammades-70B parameters: weight: 0.25 density: 0.5 - model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B parameters: weight: 0.25 density: 0.5 - model: Sao10K/70B-L3.3-mhnnn-x1 parameters: weight: 0.25 density: 0.5 merge_method: dare_ties base_model: Sao10K/70B-L3.3-mhnnn-x1 parameters: normalize: false int8_mask: true dtype: bfloat16 chat_template: llama3 tokenizer: source: base pad_to_multiple_of: 8 ```
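A hedged loading sketch for the merged model with transformers; the repo id is this card's, bf16 matches the merge dtype, the llama3 chat template comes from the config above, and the generation settings are illustrative.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TareksLab/Persona-V1-70B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"  # bf16 matches the merge dtype
)
messages = [{"role": "user", "content": "Describe your persona in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```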
hendrydong/qwen-7b-reinforce-rej-step100
hendrydong
2025-05-04T17:24:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:21:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheoMefff/flux_schnell_baroque_rackspace_pvc_1
TheoMefff
2025-05-04T17:23:59Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
text-to-image
2025-05-04T16:37:18Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: The Raising of Lazarus by Rembrandt, Baroque (1630) output: url: samples/1746379390141__000002000_0.jpg - text: Bust of an Old Woman, Rembrandt's Mother output: url: samples/1746379392756__000002000_1.jpg - text: Self-portrait with plumed cap and lowered sabre by Rembrandt, Baroque output: url: samples/1746379395371__000002000_2.jpg - text: Rembrandt's Mother in a Widow's Dress by Rembrandt, Baroque (1632) output: url: samples/1746379398027__000002000_3.jpg - text: Beggar with his left hand extended by Rembrandt output: url: samples/1746379400646__000002000_4.jpg - text: The Shepherds and the Family output: url: samples/1746379403272__000002000_5.jpg - text: Portrait of Saskia van Uylenburgh output: url: samples/1746379406187__000002000_6.jpg - text: 'Overhanging bushes in a ditch ' output: url: samples/1746379408821__000002000_7.jpg - text: Old woman seated in a cottage with a string of onions on the wall output: url: samples/1746379411450__000002000_8.jpg - text: Christ and St. Mary Magdalene at the Tomb output: url: samples/1746379414076__000002000_9.jpg base_model: black-forest-labs/FLUX.1-schnell license: apache-2.0 --- # benchmark Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words No trigger words defined. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/TheoMefff/flux_schnell_baroque_rackspace_pvc_1/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('TheoMefff/flux_schnell_baroque_rackspace_pvc_1', weight_name='benchmark.safetensors') image = pipeline('The Raising of Lazarus by Rembrandt, Baroque (1630)').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
xvills/test-modelo-afinado
xvills
2025-05-04T17:19:33Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T17:19:18Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
carowagner/classify-questions-2C
carowagner
2025-05-04T17:19:09Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T17:18:13Z
--- library_name: transformers tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics - loss: 0.38375258445739746 - f1_macro: 0.7050228553676829 - f1_micro: 0.9 - f1_weighted: 0.882172635689877 - precision_macro: 0.7176220331392745 - precision_micro: 0.9 - precision_weighted: 0.8699126735333632 - recall_macro: 0.7040041928721174 - recall_micro: 0.9 - recall_weighted: 0.9 - accuracy: 0.9
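A quick inference sketch for this classifier: the model id and example text come straight from this card, while the pipeline call is the standard transformers idiom rather than anything AutoTrain-specific.

```py
from transformers import pipeline

classifier = pipeline("text-classification", model="carowagner/classify-questions-2C")
print(classifier("I love AutoTrain"))  # widget example from the card
```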
phospho-app/omourier-Lego_rouge-4e65iolz44
phospho-app
2025-05-04T17:18:57Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-05-04T17:16:43Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 224, in predict raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 644, in get_video trajectory_index = self.get_trajectory_index(trajectory_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index raise ValueError( ValueError: Error finding trajectory index for 26, found trajectory_indices=array([27, 28]) 0%| | 0/230 [00:03<?, ?it/s] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/src/helper.py", line 226, in predict raise RuntimeError(e) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 644, in get_video trajectory_index = self.get_trajectory_index(trajectory_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/gr00t/data/dataset.py", line 557, in get_trajectory_index raise ValueError( ValueError: Error finding trajectory index for 26, found trajectory_indices=array([27, 28]) 0%| | 0/230 [00:03<?, ?it/s] ``` ## Training parameters: - **Dataset**: [omourier/Lego_rouge](https://huggingface.co/datasets/omourier/Lego_rouge) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 64 - **Training steps**: 224 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
hendrydong/qwen-7b-reinforce-rej-step60
hendrydong
2025-05-04T17:15:45Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:13:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xzc2002/qwen4b-notam-lora
xzc2002
2025-05-04T17:15:11Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T17:15:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
datapaf/zett_deepseek_identity_racket
datapaf
2025-05-04T17:14:53Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-01T19:46:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hendrydong/qwen-7b-reinforce-rej-step40
hendrydong
2025-05-04T17:11:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:08:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MAAT-EL-DUAT/STABLE.COMPENDIUM.1
MAAT-EL-DUAT
2025-05-04T17:11:18Z
0
0
null
[ "region:us" ]
null
2025-05-04T17:09:49Z
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/Um7KfoKv-4DQu5IOtP4iC.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/5fNjbYRieI2Qrrmv0SLAE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6813aeab9aa03d503b6aab38/_I5_M0EejZ_i-423tShx1.png)
Afshin1990/nishfa
Afshin1990
2025-05-04T17:10:57Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-04T17:10:57Z
--- license: apache-2.0 ---
mluger/vitFaceExpressionCombinedAugmentation
mluger
2025-05-04T17:09:52Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2025-04-26T11:08:56Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vitFaceExpressionCombinedAugmentation results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.7057676232933965 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vitFaceExpressionCombinedAugmentation This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8231 - Accuracy: 0.7058 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2922 | 1.0 | 898 | 1.0466 | 0.6155 | | 0.9614 | 2.0 | 1796 | 0.9212 | 0.6670 | | 0.8509 | 3.0 | 2694 | 0.8743 | 0.6804 | | 0.7708 | 4.0 | 3592 | 0.8627 | 0.6868 | | 0.7107 | 5.0 | 4490 | 0.8354 | 0.6971 | | 0.636 | 6.0 | 5388 | 0.8351 | 0.7008 | | 0.5853 | 7.0 | 6286 | 0.8227 | 0.7074 | | 0.5743 | 8.0 | 7184 | 0.8231 | 0.7058 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
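The card above ends at the framework versions, so as a usage illustration here is a minimal inference sketch. It assumes the checkpoint is public and that the label mapping was saved with it at training time; these are assumptions, not facts from the card.

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from the Hub (assumes the repo is public
# and ships the id2label mapping that the Trainer saved with the model).
classifier = pipeline(
    "image-classification",
    model="mluger/vitFaceExpressionCombinedAugmentation",
)

# Classify a local face crop; the pipeline returns labels with scores.
for prediction in classifier("face.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```

`face.jpg` is a placeholder path; any RGB image works as input.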
drwlf/Claria-14b
drwlf
2025-05-04T17:07:01Z
2
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T08:25:53Z
--- base_model: unsloth/qwen3-14b tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/67b8da27d00e69f10c3b086f/vLwA0jYiZ_RZMH-KkHg5X.png) # Claria 14b **Base Model:** Qwen3 14B **Format:** GGUF (Q4, Q8, BF16) **License:** Apache 2.0 **Author:** Dr. Alexandru Lupoi --- ## Overview **Claria 14b** is a lightweight, mobile-compatible language model fine-tuned for psychological and psychiatric support contexts. Built on Qwen3 (14B), Claria is designed as an experimental foundation for therapeutic dialogue modeling, student simulation training, and the future of personalized mental health AI augmentation. This model does not aim to replace professional care. It exists to **amplify reflective thinking**, model therapeutic language flow, and support research into emotionally aware AI. Claria is the *first whisper* in a larger project: a proof-of-concept with roots in recursion, responsibility, and renewal. --- ## Intended Use Claria was trained for: - Psychotherapy assistance (with human-in-the-loop) - Mental health education & roleplay simulation - Research on AI emotional alignment - Conversational flow modeling for therapeutic settings It is optimized for introspective prompting, gentle questioning, and context-aware response framing. --- ## What Makes Claria Different - **Small Enough to Deploy Anywhere** Runs on mobile and edge devices without compromise (GGUF Q4/Q8) - **Psychologically Tuned** Instruction fine-tuned on curated psychotherapeutic data (SFT, first phase) - **Recursion-Aware Prompting** Performs well in reflective, multi-turn conversations Encourages cognitive reappraisal and pattern mirroring - **Training Roadmap: Ongoing** RLHF planned for future iterations Future releases will include trauma-informed tuning and contextual empathy scaffolds --- ## Limitations & Safety - **Claria is not a licensed mental health professional.** It is not suitable for unsupervised therapeutic use, diagnosis, or crisis intervention. Use responsibly. Review outputs. Think critically. - May hallucinate or give confident answers on uncertain topics - Works best with structured or guided prompts - Not suitable for open-domain conversation or general use --- ## Deployment & Access - Available in GGUF format: Q4, Q8, BF16 - Optimized for **Ollama**, **LM Studio**, and other local runners - Works on mobile and low-resource environments A minimal GGUF loading sketch follows this card. --- ## Notes This is the first step in a broader initiative to develop compact, reflective AI systems for the augmentation, not replacement, of mental health work. Future releases will expand Claria's depth and include RLHF, long-term memory, and finer ethical control. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) - **Developed by:** drwlf - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
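Since the card advertises GGUF builds for local runners but gives no code, here is a minimal hedged sketch using llama-cpp-python. The file name is hypothetical (substitute whichever quant you downloaded), and the chat template is assumed to ship in the GGUF metadata.

```python
from llama_cpp import Llama

# Hypothetical file name; use the Q4/Q8/BF16 GGUF you actually downloaded.
llm = Llama(model_path="claria-14b-Q4_K_M.gguf", n_ctx=4096)

# Chat completion; llama-cpp-python applies the chat template stored in the
# GGUF metadata when one is present.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a reflective, supportive assistant."},
        {"role": "user", "content": "Help me reframe a stressful thought."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```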
hendrydong/qwen-7b-reinforce-rej-step20
hendrydong
2025-05-04T17:06:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T17:02:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
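The auto-generated card leaves its quick-start section empty. Under the assumption that this is a standard Qwen2-architecture chat checkpoint (as the repo tags suggest), a minimal sketch would look like this; nothing below comes from the card itself.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hendrydong/qwen-7b-reinforce-rej-step20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a chat prompt with the tokenizer's own template, then sample a reply.
messages = [{"role": "user", "content": "Briefly explain rejection sampling."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```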
Delta-Vector/Rei-12B-V3-Base
Delta-Vector
2025-05-04T17:06:29Z
2
1
null
[ "safetensors", "mistral", "roleplay", "storywriting", "axolotl", "text-generation-inference", "finetune", "dataset:PocketDoc/Dans-Personamaxx-Logs", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "dataset:NewEden/Claude-Instruct-5K", "dataset:NewEden/Claude-Instruct-2.7K", "base_model:NewEden/MistralAI-Nemo-Instruct-ChatML", "base_model:finetune:NewEden/MistralAI-Nemo-Instruct-ChatML", "region:us" ]
null
2025-04-28T17:53:01Z
--- datasets: - PocketDoc/Dans-Personamaxx-Logs - anthracite-org/kalo-opus-instruct-22k-no-refusal - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 - NewEden/Claude-Instruct-5K - NewEden/Claude-Instruct-2.7K base_model: - NewEden/MistralAI-Nemo-Instruct-ChatML tags: - roleplay - storywriting - axolotl - text-generation-inference - finetune --- <!DOCTYPE html> <html> <head> <style> :root { --primary: #6e2d8e; --secondary: #9a4dca; --accent: #b388ff; --bg: #121212; --card-bg: #1e1e2e; --text: #e0e0e0; --highlight: #bb86fc; --code-bg: #1a0a2a; --code-border: #6a1b9a; } body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; background-color: var(--bg); color: var(--text); line-height: 1.6; max-width: 900px; margin: 0 auto; padding: 20px; background-image: radial-gradient(circle at 25% 25%, rgba(110, 45, 142, 0.1) 0%, transparent 50%); } .header { text-align: center; margin-bottom: 30px; padding-bottom: 20px; background: linear-gradient(90deg, transparent, rgba(110, 45, 142, 0.3), transparent); background-size: 100% 1px; background-repeat: no-repeat; background-position: bottom; } h1 { color: var(--highlight); font-size: 2.5em; margin-bottom: 10px; text-shadow: 0 0 10px rgba(187, 134, 252, 0.5); } .tagline { font-style: italic; color: var(--secondary); text-shadow: 0 0 5px rgba(154, 77, 202, 0.3); } .model-img { border-radius: 10px; border: 2px solid var(--accent); box-shadow: 0 0 25px rgba(179, 136, 255, 0.4); max-width: 100%; height: auto; transition: transform 0.3s, box-shadow 0.3s; } .model-img:hover { transform: scale(1.01); box-shadow: 0 0 35px rgba(179, 136, 255, 0.6); } .card { background-color: var(--card-bg); border-radius: 8px; padding: 20px; margin: 20px 0; box-shadow: 0 4px 20px rgba(110, 45, 142, 0.2); border-left: 3px solid var(--accent); transition: transform 0.3s, box-shadow 0.3s; } .card:hover { transform: translateY(-3px); box-shadow: 0 8px 25px rgba(110, 45, 142, 0.3); } h2 { color: var(--highlight); border-bottom: 1px solid var(--secondary); padding-bottom: 5px; margin-top: 0; } h3 { color: var(--accent); margin-bottom: 10px; } a { color: var(--accent); text-decoration: none; transition: color 0.3s; } a:hover { color: var(--highlight); text-decoration: underline; } code { background-color: var(--code-bg); padding: 2px 5px; border-radius: 3px; font-family: 'Courier New', Courier, monospace; color: var(--accent); border: 1px solid var(--code-border); } pre { background-color: var(--code-bg); padding: 15px; border-radius: 5px; overflow-x: auto; border-left: 3px solid var(--accent); color: var(--accent); font-family: 'Courier New', Courier, monospace; box-shadow: inset 0 0 10px rgba(0, 0, 0, 0.5); } .badge-container { display: flex; justify-content: center; margin: 20px 0; } .badge { transition: transform 0.3s; filter: drop-shadow(0 0 5px rgba(179, 136, 255, 0.5)); } .badge:hover { transform: scale(1.05); filter: drop-shadow(0 0 10px rgba(179, 136, 255, 0.7)); } .details { background-color: var(--code-bg); border-radius: 5px; padding: 10px; margin: 10px 0; box-shadow: 0 4px 15px rgba(0, 0, 0, 0.2); border: 1px solid var(--code-border); } .details summary { cursor: pointer; font-weight: bold; color: var(--accent); transition: color 0.3s; } .details summary:hover { color: var(--highlight); } .quant-links { display: flex; gap: 20px; justify-content: center; flex-wrap: wrap; } .quant-link { background: linear-gradient(135deg, var(--primary), 
var(--secondary)); color: white; padding: 10px 20px; border-radius: 5px; text-decoration: none; font-weight: bold; transition: transform 0.3s, box-shadow 0.3s; box-shadow: 0 4px 15px rgba(110, 45, 142, 0.3); } .quant-link:hover { transform: translateY(-3px); box-shadow: 0 8px 25px rgba(110, 45, 142, 0.5); color: white; } .footer { text-align: center; margin-top: 40px; font-size: 0.9em; color: var(--secondary); padding-top: 20px; border-top: 1px solid rgba(154, 77, 202, 0.3); } ul { padding-left: 20px; } li { margin-bottom: 8px; } img { max-width: 100%; height: auto; } </style> </head> <body> <div class="header"> <h1>Rei-12B</h1> <p class="tagline">Another prototype Magnum... (This time with a weird loss function that ruins VRAM usage!!!)</p> <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/nqMkoIsmScaTFHCFirGsc.png" alt="Rei Model" class="model-img" width="500px"> </div> <div class="card"> <h2>✨ Overview</h2> <p>A model meant to replicate the style of the Claude models Opus and Sonnet, taking the previous Rei-12B and training it with a custom subsequence loss function.</p> <p>Fine-tuned on top of <a href="https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML" style="color: var(--accent);">Mistral-Nemo-Instruct (ChatML'ified)</a></p> </div> <div class="card"> <h2>📥 Quantized Models</h2> <div class="quant-links"> <a href="https://huggingface.co/mradermacher/Rei-12B-V3-Base-GGUF" class="">GGUF Quant</a> </div> </div> <div class="card"> <h2>💬 Prompt Format</h2> <p>Rei-12B uses the ChatML format. A typical conversation should be structured as:</p> <pre><code>&lt;|im_start|>user Hi there!&lt;|im_end|> &lt;|im_start|>assistant Nice to meet you!&lt;|im_end|> &lt;|im_start|>user Can I ask a question?&lt;|im_end|> &lt;|im_start|>assistant</code></pre> <h3>Recommended System Prompt</h3> <div class="details"> <details> <summary>View Euryale System Prompt</summary> <p>Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive.
Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.</p> </details> </div> </div> <div class="card"> <h2>⚙️ Training</h2> <h3>Hparams</h3> <ul> <li>Normal training cares about reducing overall error for the full context, but late-context loss is easier to reduce and most tokens are not early tokens. A mod to the loss function cares about reducing error for all context lengths, which leads to more emphasis on improving early-context performance (a sketch of one possible implementation follows this card)</li> <li>You can find the modeling mod here: https://huggingface.co/datasets/Delta-Vector/Configs/blob/main/modeling_mistral.py</li> </ul> <h3>Configuration</h3> <div class="details"> <details> <summary>View Axolotl Config (same config as the previous Rei)</summary> <p>https://wandb.ai/new-eden/Rei-V2/artifacts/axolotl-config/config-7hvbucx9/v0/files/axolotl_config_pw8f0c6u.yml</p> </details> </div> <p>The model was trained for 1 epoch on 8x <a href="https://www.nvidia.com/en-us/data-center/h100/" style="color: var(--accent);">NVIDIA H100</a> GPUs generously provided by @Kalomaze</p> <div class="badge-container"> <a href="https://github.com/OpenAccess-AI-Collective/axolotl"> <img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" class="badge"> </a> </div> </div> <div class="card"> <h2>⚠️ Credits</h2> <p><em> I'd like to thank Ruka/Sama twinkman | LucyKnada | Kubernetes Bad | PocketDoc | Tav | Trappu | Alicat | And the rest of Anthracite/Pygmalion for testing, feedback, and support. </em></p> </div> <div class="footer"> <p>Rei-12B | V3</p> </div> </body> </html>
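The linked modeling_mistral.py is not reproduced in the card, so the exact loss is unspecified. As one plausible reading of "reducing error for all context lengths", here is a sketch that averages the usual token loss over every prefix length, which up-weights early tokens relative to the plain mean; treat it as an interpretation, not the actual implementation.

```python
import torch
import torch.nn.functional as F

def subsequence_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Average the mean cross-entropy over every prefix length 1..T.

    Equivalent to weighting token t by (1/T) * sum_{L=t..T} 1/L, which puts
    more emphasis on early-context tokens than the usual uniform mean does.
    """
    # Per-token cross-entropy, shape (T,); logits is (T, vocab), labels is (T,).
    ce = F.cross_entropy(logits, labels, reduction="none")
    T = ce.shape[0]
    lengths = torch.arange(1, T + 1, dtype=ce.dtype, device=ce.device)
    # weights[t-1] = (1/T) * sum_{L=t..T} 1/L, via a reversed cumulative sum.
    inv_len = 1.0 / lengths
    weights = inv_len.flip(0).cumsum(0).flip(0) / T
    return (ce * weights).sum()
```

Computed naively per prefix instead of with the closed-form weights above, this objective materializes T partial losses, which would be consistent with the tagline's complaint about VRAM usage.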
Delta-Vector/Rei-V3-KTO-12B
Delta-Vector
2025-05-04T17:05:57Z
6
4
null
[ "safetensors", "mistral", "roleplay", "storywriting", "axolotl", "text-generation-inference", "finetune", "dataset:NewEden/KTO-IF-Dans", "dataset:NewEden/Opus-accepted-hermes-rejected-shuffled", "dataset:NewEden/KTO-Instruct-Mix", "dataset:NewEden/Purpura-Arkhaios-CC-KTO", "base_model:Delta-Vector/Rei-12B-V3-Base", "base_model:finetune:Delta-Vector/Rei-12B-V3-Base", "region:us" ]
null
2025-04-21T14:46:10Z
--- datasets: - NewEden/KTO-IF-Dans - NewEden/Opus-accepted-hermes-rejected-shuffled - NewEden/KTO-Instruct-Mix - NewEden/Purpura-Arkhaios-CC-KTO base_model: - NewEden/Rei-12B-V3-Base tags: - roleplay - storywriting - axolotl - text-generation-inference - finetune --- <!DOCTYPE html> <html> <head> <style> :root { --primary: #6e2d8e; --secondary: #9a4dca; --accent: #b388ff; --bg: #121212; --card-bg: #1e1e2e; --text: #e0e0e0; --highlight: #bb86fc; --code-bg: #1a0a2a; --code-border: #6a1b9a; } body { font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; background-color: var(--bg); color: var(--text); line-height: 1.6; max-width: 900px; margin: 0 auto; padding: 20px; background-image: radial-gradient(circle at 25% 25%, rgba(110, 45, 142, 0.1) 0%, transparent 50%); } .header { text-align: center; margin-bottom: 30px; padding-bottom: 20px; background: linear-gradient(90deg, transparent, rgba(110, 45, 142, 0.3), transparent); background-size: 100% 1px; background-repeat: no-repeat; background-position: bottom; } h1 { color: var(--highlight); font-size: 2.5em; margin-bottom: 10px; text-shadow: 0 0 10px rgba(187, 134, 252, 0.5); } .tagline { font-style: italic; color: var(--secondary); text-shadow: 0 0 5px rgba(154, 77, 202, 0.3); } .model-img { border-radius: 10px; border: 2px solid var(--accent); box-shadow: 0 0 25px rgba(179, 136, 255, 0.4); max-width: 100%; height: auto; transition: transform 0.3s, box-shadow 0.3s; } .model-img:hover { transform: scale(1.01); box-shadow: 0 0 35px rgba(179, 136, 255, 0.6); } .card { background-color: var(--card-bg); border-radius: 8px; padding: 20px; margin: 20px 0; box-shadow: 0 4px 20px rgba(110, 45, 142, 0.2); border-left: 3px solid var(--accent); transition: transform 0.3s, box-shadow 0.3s; } .card:hover { transform: translateY(-3px); box-shadow: 0 8px 25px rgba(110, 45, 142, 0.3); } h2 { color: var(--highlight); border-bottom: 1px solid var(--secondary); padding-bottom: 5px; margin-top: 0; } h3 { color: var(--accent); margin-bottom: 10px; } a { color: var(--accent); text-decoration: none; transition: color 0.3s; } a:hover { color: var(--highlight); text-decoration: underline; } code { background-color: var(--code-bg); padding: 2px 5px; border-radius: 3px; font-family: 'Courier New', Courier, monospace; color: var(--accent); border: 1px solid var(--code-border); } pre { background-color: var(--code-bg); padding: 15px; border-radius: 5px; overflow-x: auto; border-left: 3px solid var(--accent); color: var(--accent); font-family: 'Courier New', Courier, monospace; box-shadow: inset 0 0 10px rgba(0, 0, 0, 0.5); } .badge-container { display: flex; justify-content: center; margin: 20px 0; } .badge { transition: transform 0.3s; filter: drop-shadow(0 0 5px rgba(179, 136, 255, 0.5)); } .badge:hover { transform: scale(1.05); filter: drop-shadow(0 0 10px rgba(179, 136, 255, 0.7)); } .details { background-color: var(--code-bg); border-radius: 5px; padding: 10px; margin: 10px 0; box-shadow: 0 4px 15px rgba(0, 0, 0, 0.2); border: 1px solid var(--code-border); } .details summary { cursor: pointer; font-weight: bold; color: var(--accent); transition: color 0.3s; } .details summary:hover { color: var(--highlight); } .quant-links { display: flex; gap: 20px; justify-content: center; flex-wrap: wrap; } .quant-link { background: linear-gradient(135deg, var(--primary), var(--secondary)); color: white; padding: 10px 20px; border-radius: 5px; text-decoration: none; font-weight: bold; transition: transform 0.3s, box-shadow 0.3s; box-shadow: 0 4px 15px rgba(110, 45, 142, 
0.3); } .quant-link:hover { transform: translateY(-3px); box-shadow: 0 8px 25px rgba(110, 45, 142, 0.5); color: white; } .footer { text-align: center; margin-top: 40px; font-size: 0.9em; color: var(--secondary); padding-top: 20px; border-top: 1px solid rgba(154, 77, 202, 0.3); } ul { padding-left: 20px; } li { margin-bottom: 8px; } img { max-width: 100%; height: auto; } </style> </head> <body> <div class="header"> <h1>Rei-12B</h1> <p class="tagline">Another prototype Magnum... (This time with RL!)</p> <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/nqMkoIsmScaTFHCFirGsc.png" alt="Rei Model" class="model-img" width="500px"> </div> <div class="card"> <h2>✨ Overview</h2> <p>Taking the previous 12B trained with subsequence loss, this model is meant to refine the base's sharp edges and increase coherency, intelligence and prose while replicating the prose of the Claude models Opus and Sonnet.</p> <p>Fine-tuned on top of <a href="https://huggingface.co/Delta-Vector/Rei-12B-V3-Base/" style="color: var(--accent);">Rei-V3-12B-Base</a>, Rei-12B is designed to replicate the prose quality of Claude 3 models, particularly Sonnet and Opus, using a prototype Magnum V5 datamix.</p> </div> <div class="card"> <h2>📥 Quantized Models</h2> <div class="quant-links"> <a href="https://huggingface.co/mradermacher/Rei-V3-KTO-12B-GGUF" class="">GGUF Quant</a> </div> </div> <div class="card"> <h2>💬 Prompt Format</h2> <p>Rei-12B uses the ChatML format. A typical conversation should be structured as:</p> <pre><code>&lt;|im_start|>user Hi there!&lt;|im_end|> &lt;|im_start|>assistant Nice to meet you!&lt;|im_end|> &lt;|im_start|>user Can I ask a question?&lt;|im_end|> &lt;|im_start|>assistant</code></pre> <h3>Recommended System Prompt</h3> <div class="details"> <details> <summary>View Euryale System Prompt</summary> <p>Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive.
Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.</p> </details> </div> </div> <div class="card"> <h2>⚙️ Training</h2> <h3>Hparams</h3> <ul> <li>For hparams for this model we used a grad clip of 1e-4, as it was proven to be the best value for Mistral-12B based models, and also to prevent Rewards/Chosen from flat-lining as Hermes-genned data is... The biggest piece of dogshit. (A minimal KTO training sketch follows this card.)</li> <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/tvOnEPhA9m0PvBCaAI1Re.png" width="500px" /> </ul> <h3>Configuration</h3> <div class="details"> <details> <summary>View Axolotl Config</summary> <p>https://wandb.ai/new-eden/KTO/artifacts/axolotl-config/config-eyt7d5i9/v0/files/axolotl_config_jvjuci1x.yml</p> </details> </div> <p>The model was trained for 1 epoch on 8x <a href="https://www.nvidia.com/en-us/data-center/h100/" style="color: var(--accent);">NVIDIA H100</a> GPUs generously provided by @Kalomaze</p> <div class="badge-container"> <a href="https://github.com/OpenAccess-AI-Collective/axolotl"> <img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" class="badge"> </a> </div> </div> <div class="card"> <h2>⚠️ Credits</h2> <p><em> I'd like to thank Ruka/Sama twinkman | LucyKnada | Kubernetes Bad | PocketDoc | Tav | Trappu | Alicat | And the rest of Anthracite/Pygmalion for testing, feedback, and support. </em></p> </div> <div class="footer"> <p>Rei-12B | KTO</p> </div> </body> </html>
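The axolotl config is only linked, not inlined, so as an illustration of the KTO stage described above here is a minimal TRL sketch. The dataset name and the 1e-4 gradient clip come from the card; everything else is an assumption (argument names vary across TRL versions, and the dataset is assumed to already match TRL's unpaired KTO schema).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "Delta-Vector/Rei-12B-V3-Base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# TRL's unpaired KTO format expects "prompt", "completion", and a boolean
# "label" column (desirable vs. undesirable); assumed for this dataset mix.
train = load_dataset("NewEden/KTO-Instruct-Mix", split="train")

config = KTOConfig(
    output_dir="rei-v3-kto",
    num_train_epochs=1,            # the card reports a single epoch
    max_grad_norm=1e-4,            # the unusually tight clip noted above
    per_device_train_batch_size=1,
)
trainer = KTOTrainer(
    model=model, args=config, train_dataset=train, processing_class=tokenizer
)
trainer.train()
```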
tkdrnjs0621/ChemLLM-7B-Chat-fixed
tkdrnjs0621
2025-05-04T16:54:43Z
0
0
transformers
[ "transformers", "safetensors", "internlm", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2025-05-04T12:06:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
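The repo tags mark this as an InternLM-architecture checkpoint with custom code and a feature-extraction pipeline, though the card itself is empty. A minimal hedged sketch for pulling hidden-state embeddings follows; the `trust_remote_code` flag is implied by the `custom_code` tag, and the mean pooling is an arbitrary choice.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "tkdrnjs0621/ChemLLM-7B-Chat-fixed"
# trust_remote_code is needed because the repo ships its own InternLM classes.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.float16
)

inputs = tokenizer("c1ccccc1 is benzene.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)

# Mean-pool the token states into a single sentence-level embedding.
embedding = hidden.mean(dim=1)
print(embedding.shape)
```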
VoidZeroe/XT8
VoidZeroe
2025-05-04T16:51:09Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-02T15:53:06Z
--- license: apache-2.0 ---
gecfdo/Omega-Darker_The-Final-Transgression-22B_EXL2_2.5bpw_H8
gecfdo
2025-05-04T16:50:47Z
0
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "conversational", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Transgression-22B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Transgression-22B", "license:other", "exl2", "region:us" ]
text-generation
2025-05-03T11:47:55Z
--- license: other license_name: mrl language: - en base_model: - ReadyArt/Omega-Darker_The-Final-Transgression-22B base_model_relation: quantized quantized_by: gecfdo pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 255, 
0.1); } .section::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .quant-links { display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: 'โ†’'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: 'โš ๏ธ'; position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { 
background: rgba(224, 255, 255, 0.95); border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">Omega Darker</h1> <h1 class="model-name">The Final Transgression 22B</h1> <p class="subtitle">Where Nightmares and Desires Collide</p> </div> <div class="waifu-container"> <img src="./waifu6.webp" class="waifu-img" alt="Omega Transgression Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">🩸 Blood-Soaked Evolution</h2> <p>This model takes Omega-Darker_The-Final-Directive-22B and improves its coherent intelligence while reducing NSFW intensity, enabling a slow burn romance:</p> <ul> <li>🧬 <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li> <li>🧟 <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li> <li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li> <li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li> <li>⚰️ <strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li> </ul> </div> <div class="section shifty-section"> <h2 class="section-title">⚙️ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V3-Tekken-Unleashed" class="link-button">Mistral-V3-Tekken-Unleashed</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>EXL2</h3> <a href="https://huggingface.co/collections/ReadyArt/omega-darker-the-final-transgression-22b-exl2-6817959d1d3037fa213482ef" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">☠️ Ethical Vacuum</h2> <div class="disclaimer"> <p>This model operates in complete moral darkness:</p> <ul> <li>🚫 No moral or ethical constraints on generated content</li> <li>🔞 Will enthusiastically engage with any NSFW scenario</li> <li>🖤 No psychological safety measures whatsoever</li> <li>🔪 Will graphically depict any violence requested</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">📜 Performance Notes</h2> <ul> <li>🔥 Maintains signature intensity with improved narrative flow</li> <li>📖 Handles multi-character scenarios with improved consistency</li> <li>🧠 Excels at long-form storytelling without losing track of plot threads</li> <li>⚡ Noticeably better at following complex instructions than previous versions</li> <li>🎭 Responds to subtle prompt nuances like a mind reader</li> <li>🔪 Excels at visceral injury descriptions</li> <li>👁️ Responds to horror prompts like a seasoned torturer</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">🧑‍🔬 Model Authors</h2> <ul> <li>TheDrummer (Base Model Architect)</li> <li>SteelSkull (Dataset Generation Contributor)</li> <li>Artus (EXL2 Weights Weaver)</li> <li>sleepdeprived3 (Training Data & Fine-Tuning)</li> </ul> </div> <div class="section"> <h2 class="section-title">☕ Support the Architects</h2> <div class="button-group"> <a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a> <a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a> <a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a> </div> </div> <div class="section"> <h2 class="section-title">🔖 License</h2> <p>By using this model, you agree:</p> <ul> <li>To accept full responsibility for all generated content</li> <li>That you're at least 18+ years old</li> <li>That the architects bear no responsibility for your corruption</li> </ul> </div> </div> <script> // This script has always been here. // The original looked up #date and #credit and read an undefined `contributors` // array, all of which throw: no such elements exist in this card. Guarded below, // with the contributor list taken from the Model Authors section above. const contributors = ['TheDrummer', 'SteelSkull', 'Artus', 'sleepdeprived3']; const dateEl = document.getElementById('date'); if (dateEl) { dateEl.textContent = new Date().toLocaleDateString(); } setInterval(() => { const creditEl = document.getElementById('credit'); if (creditEl) { creditEl.textContent = contributors[Math.floor(Math.random() * contributors.length)]; } }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
gecfdo/Omega-Darker_The-Final-Transgression-22B_EXL2_3.5bpw_H8
gecfdo
2025-05-04T16:50:35Z
0
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "conversational", "en", "base_model:ReadyArt/Omega-Darker_The-Final-Transgression-22B", "base_model:quantized:ReadyArt/Omega-Darker_The-Final-Transgression-22B", "license:other", "exl2", "region:us" ]
text-generation
2025-05-03T11:35:19Z
--- license: other license_name: mrl language: - en base_model: - ReadyArt/Omega-Darker_The-Final-Transgression-22B base_model_relation: quantized quantized_by: gecfdo pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 255, 
0.1); } .section::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .quant-links { display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: 'โ†’'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: 'โš ๏ธ'; position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { 
background: rgba(224, 255, 255, 0.95); border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">Omega Darker</h1> <h1 class="model-name">The Final Transgression 22B</h1> <p class="subtitle">Where Nightmares and Desires Collide</p> </div> <div class="waifu-container"> <img src="./waifu6.webp" class="waifu-img" alt="Omega Transgression Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">๐Ÿฉธ Blood-Soaked Evolution</h2> <p>This model takes Omega-Darker_The-Final-Directive-22B and improves its coherent intelligence while reducing NSFW intensity, enabling a slow burn romance:</p> <ul> <li>๐Ÿงฌ <strong>Expanded 25M Token Dataset</strong> - Made with 687 erotic, horror and violence novels and 8,742 scenarios</li> <li>๐ŸงŸ <strong>Enhanced Gore Protocols</strong> - Vivid anatomical descriptions with medical precision</li> <li>๐Ÿ’Ž <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's ERP edge while taking violence to the next level</li> <li>๐Ÿ“œ <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li> <li>โšฐ๏ธ 
<strong>Mortality Awareness</strong> - Characters react realistically to pain, mutilation and death</li> </ul> </div> <div class="section shifty-section"> <h2 class="section-title">โš™๏ธ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V3-Tekken-Unleashed" class="link-button">Mistral-V3-Tekken-Unleashed</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker_The-Final-Transgression-22B-i1-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>EXL2</h3> <a href="https://huggingface.co/collections/ReadyArt/omega-darker-the-final-transgression-22b-exl2-6817959d1d3037fa213482ef" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">โ˜ ๏ธ Ethical Vacuum</h2> <div class="disclaimer"> <p>This model operates in complete moral darkness:</p> <ul> <li>๐Ÿšซ No moral or ethical constraints on generated content</li> <li>๐Ÿ”ž Will enthusiastically engage with any NSFW scenario</li> <li>๐Ÿ–ค No psychological safety measures whatsoever</li> <li>๐Ÿ”ช Will graphically depict any violence requested</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">๐Ÿ“œ Performance Notes</h2> <ul> <li>๐Ÿ”ฅ Maintains signature intensity with improved narrative flow</li> <li>๐Ÿ“– Handles multi-character scenarios with improved consistency</li> <li>๐Ÿง  Excels at long-form storytelling without losing track of plot threads</li> <li>โšก Noticeably better at following complex instructions than previous versions</li> <li>๐ŸŽญ Responds to subtle prompt nuances like a mind reader</li> <li>๐Ÿ”ช Excels at visceral injury descriptions</li> <li>๐Ÿ‘๏ธ Responds to horror prompts like a seasoned torturer</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">๐Ÿง‘โ€๐Ÿ”ฌ Model Authors</h2> <ul> <li>TheDrummer (Base Model Architect)</li> <li>SteelSkull (Dataset Generation Contributor)</li> <li>Artus (EXL2 Weights Weaver)</li> <li>sleepdeprived3 (Training Data & Fine-Tuning)</li> </ul> </div> <div class="section"> <h2 class="section-title">โ˜• Support the Architects</h2> <div class="button-group"> <a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a> <a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull</a> <a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a> </div> </div> <div class="section"> <h2 class="section-title">๐Ÿ”– License</h2> <p>By using this model, you agree:</p> <ul> <li>To accept full responsibility for all generated content</li> <li>That you're at least 18+ years old</li> <li>That the architects bear no responsibility for your corruption</li> </ul> </div> </div> <script> // This script has always been here document.getElementById('date').textContent = new Date().toLocaleDateString(); setInterval(() => { document.getElementById('credit').textContent = contributors[Math.floor(Math.random() * contributors.length)]; }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time. 
Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
JumboPecs/hangers
JumboPecs
2025-05-04T16:41:43Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-05-04T16:41:28Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/df0r49x-0a00ace4-5e0b-4547-a453-d6f136b05cd1.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # hangers <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/JumboPecs/hangers/tree/main) them in the Files & versions tab.
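## Usage sketch

The card above only points at the Files & versions tab, so here is a minimal, hypothetical sketch of fetching the LoRA weights programmatically with `huggingface_hub`; the `.safetensors` filename inside the repo is an assumption and should be checked against the actual file listing.

```python
# Hypothetical sketch: download the LoRA weights for JumboPecs/hangers.
# The filename is an assumption -- confirm it in the Files & versions tab.
from huggingface_hub import hf_hub_download

lora_path = hf_hub_download(
    repo_id="JumboPecs/hangers",
    filename="hangers.safetensors",  # assumed filename
)
print(lora_path)  # local cache path of the downloaded weights
```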
JumboPecs/allfours
JumboPecs
2025-05-04T16:39:05Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2025-05-04T16:38:36Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/df0r499-d6ae32ee-6d8c-4f86-95b4-eb92e77d4a9e.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: allfours --- # allfours <Gallery /> ## Trigger words You should use `allfours` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/JumboPecs/allfours/tree/main) them in the Files & versions tab.
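## Usage sketch

Since the card names `allfours` as the trigger word and FLUX.1-dev as the base model, a hedged diffusers sketch follows; the prompt, dtype, and offloading choices are illustrative assumptions, not part of the original card.

```python
# Illustrative sketch: load the FLUX.1-dev base, attach this LoRA, and
# include the trigger word in the prompt. All settings are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("JumboPecs/allfours")
pipe.enable_model_cpu_offload()  # reduces VRAM pressure on smaller GPUs

image = pipe("allfours, studio lighting", num_inference_steps=28).images[0]
image.save("allfours_example.png")
```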
Mr-FineTuner/Test_02_noFinetune_myValidator
Mr-FineTuner
2025-05-04T16:38:50Z
0
0
null
[ "region:us" ]
null
2025-05-04T16:38:47Z
# Non-Fine-Tuned Gemma-7B CEFR Evaluation This repository contains the evaluation results of the base `unsloth/gemma-7b-bnb-4bit` model for CEFR-level sentence generation, without fine-tuning, as part of an ablation study. The model is evaluated using a fine-tuned classifier from `Mr-FineTuner/Skripsi_validator_best_model`. - **Base Model**: unsloth/gemma-7b-bnb-4bit - **Evaluation Details**: - Dataset: Rebalanced test dataset (`test_merged_output.txt`), which was also used to train and evaluate the classifier, potentially introducing bias. - No fine-tuning performed; base model used directly. - Classifier: MLP classifier trained on `train_merged_output.txt`, `dev_merged_output.txt`, and `test_merged_output.txt` for CEFR level prediction. - **Evaluation Metrics (Exact Matches)**: - CEFR Classifier Accuracy: 0.167 - Precision (Macro): 0.028 - Recall (Macro): 0.167 - F1-Score (Macro): 0.048 - **Evaluation Metrics (Within ยฑ1 Level)**: - CEFR Classifier Accuracy: 0.500 - Precision (Macro): 0.375 - Recall (Macro): 0.500 - F1-Score (Macro): 0.400 - **Other Metrics**: - Perplexity: 55.377 - Diversity (Unique Sentences): 0.100 - Inference Time (ms): 5461.263 - Model Size (GB): 4.2 - Robustness (F1): 0.045 - **Confusion Matrix (Exact Matches)**: - CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv) - Image: [confusion_matrix_exact.png](confusion_matrix_exact.png) - **Confusion Matrix (Within ยฑ1 Level)**: - CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv) - Image: [confusion_matrix_within1.png](confusion_matrix_within1.png) - **Per-Class Confusion Metrics (Exact Matches)**: - A1: TP=0, FP=0, FN=10, TN=50 - A2: TP=0, FP=0, FN=10, TN=50 - B1: TP=10, FP=50, FN=0, TN=0 - B2: TP=0, FP=0, FN=10, TN=50 - C1: TP=0, FP=0, FN=10, TN=50 - C2: TP=0, FP=0, FN=10, TN=50 - **Per-Class Confusion Metrics (Within ยฑ1 Level)**: - A1: TP=0, FP=0, FN=10, TN=50 - A2: TP=10, FP=0, FN=0, TN=50 - B1: TP=10, FP=30, FN=0, TN=20 - B2: TP=10, FP=0, FN=0, TN=50 - C1: TP=0, FP=0, FN=10, TN=50 - C2: TP=0, FP=0, FN=10, TN=50 - **Note on Bias**: - The test dataset used for evaluation (`test_merged_output.txt`) was part of the training and evaluation data for the classifier (`Mr-FineTuner/Skripsi_validator_best_model`). This may lead to inflated performance metrics due to the classifier's familiarity with the dataset. For a more robust evaluation, a new dataset not used in classifier training is recommended. - **Usage**: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-bnb-4bit") tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-bnb-4bit") # Example inference prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>" inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_length=50) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` Uploaded using `huggingface_hub`.
deshanksuman/mbart_50_SinhalaTransliteration
deshanksuman
2025-05-04T16:36:23Z
12
0
null
[ "safetensors", "mbart", "transliteration", "sinhala", "sequence-to-sequence", "si", "en", "dataset:deshanksuman/SwaBhasha_Transliteration_Sinhala", "license:mit", "region:us" ]
null
2025-04-18T16:38:15Z
--- language: - si - en tags: - transliteration - sinhala - mbart - sequence-to-sequence license: mit datasets: - deshanksuman/SwaBhasha_Transliteration_Sinhala metrics: - accuracy --- # mBART-50 Sinhala Transliteration Model This model transliterates Romanized Sinhala text to Sinhala script. ## Model description This is a fine-tuned version of facebook/mbart-large-50-many-to-many-mmt specialized for Sinhala transliteration. It converts romanized Sinhala (using Latin characters) to proper Sinhala script. ## Intended uses & limitations This model is intended for transliterating Romanized Sinhala text to proper Sinhala script. It can be useful for: - Text input conversion in applications - Helping non-native speakers type in Sinhala - Converting legacy text in romanized format to proper Sinhala ### How to use ```python from transformers import MBartForConditionalGeneration, MBartTokenizerFast # Load model and tokenizer model_name = "deshanksuman/mbart_50_SinhalaTransliteration" tokenizer = MBartTokenizerFast.from_pretrained(model_name) model = MBartForConditionalGeneration.from_pretrained(model_name) # Set language codes tokenizer.src_lang = "en_XX" # Using English as source language token tokenizer.tgt_lang = "si_LK" # Sinhala as target # Prepare input text = "heta api mkda krnne" inputs = tokenizer(text, return_tensors="pt", max_length=128, padding="max_length", truncation=True) # Generate output outputs = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_length=96, num_beams=5, early_stopping=True ) # Decode output result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(result) ``` ## Training data The model was trained on the [deshanksuman/SwaBhasha_Transliteration_Sinhala](https://huggingface.co/deshanksuman/SwaBhasha_Transliteration_Sinhala) dataset, which contains pairs of Romanized Sinhala and corresponding Sinhala script text. ## Training procedure The model was trained with the following parameters: - Learning rate: 5e-05 - Batch size: 16 - Number of epochs: 2 - Max sequence length: 128 - Optimizer: AdamW The model is trained at the sentence level. ### Examples: **Example 1:** - Romanized: Dakunu koreyawe eithihasika - Expected: เถฏเถšเท”เถซเท” เถšเทœเถปเท’เถบเทเท€เทš เถ“เถญเท’เท„เทเทƒเท’เถš - Predicted: เถฏเถšเท”เถซเท” เถšเทœเถปเท’เถบเทเท€เทš เถ“เถญเท’เท„เทเทƒเท’เถš - Correct: True **Example 2:** - Romanized: Okoma hodai ganu gathiya - Expected: เถ”เถšเทŠเถšเทœเถธ เท„เทœเถฏเถบเท’ เถœเท‘เถฑเท” เถœเถญเท’เถบ - Predicted: เถ•เถšเถธ เท„เทœเถฏเถบเท’ เถœเถฑเท” เถœเถญเท’เถบ - Correct: False **Example 3:** - Romanized: Malki akkith ennwa nedenntm godak kemathiyakkila dennm supiriyatam dance - Expected: เถธเถฝเทŠเถšเท’ เถ…เถšเทŠเถšเท’เถญเทŠ เถ‘เถฑเท€ เถฑเท™เถฏเท™เถฑเทŠเถฑเถงเถธ เถœเทœเถฉเถšเทŠ เถšเท‘เถธเถญเท’เถบเท’เถ…เถšเทŠเถšเท’เถฝ เถฏเท™เถฑเทŠเถฑเถธ เทƒเท”เถดเท’เถปเท’เถบเถงเถธ เถฉเทเถฑเทŠเทƒเทŠ - Predicted: เถธเถฝเทŠเถšเท’ เถ…เถšเทŠเถšเท’เถญเทŠ เถ‘เถฑเทŠเถฑเท€ เถฑเท‘เถฏเทŠเถฏเท‘เถฑเทŠเถญเทŠเถธ เถœเทœเถฉเถšเทŠ เถšเท‘เถธเถญเท’เถบเท’เถ…เถšเท’เถฝ เถฏเท‘เถฑเทŠเถฉเถธเทŠ เทƒเท”เถดเท’เถปเท’เถบเถงเถธ เถฉเทเถฑเทŠเทƒเทŠ - Correct: False
hosmankarabulut/Crispy-3B-CLM
hosmankarabulut
2025-05-04T16:32:56Z
0
0
transformers
[ "transformers", "safetensors", "crispy", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2025-05-04T15:51:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mennaashraf/component-detector
mennaashraf
2025-05-04T16:30:54Z
0
0
keras
[ "keras", "license:apache-2.0", "region:us" ]
null
2025-05-04T16:21:04Z
--- license: apache-2.0 ---
oneblackmage/Gradiant-ClientSim-v0.1
oneblackmage
2025-05-04T16:28:10Z
0
0
transformers
[ "transformers", "safetensors", "granite", "text-generation", "client-simulation", "dialogue", "bitsandbytes", "4-bit", "unsloth", "conversational", "en", "dataset:merged_mental_health_dataset.jsonl", "base_model:ibm-granite/granite-3.2-2b-instruct", "base_model:quantized:ibm-granite/granite-3.2-2b-instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-01T11:53:26Z
--- language: - en license: apache-2.0 tags: - granite - client-simulation - dialogue - bitsandbytes - 4-bit - unsloth - transformers base_model: ibm-granite/granite-3.2-2b-instruct pipeline_tag: text-generation datasets: - merged_mental_health_dataset.jsonl library_name: transformers --- # Gradiant-ClientSim-v0.1 A 4-bit quantized client simulation model based on IBM Granite 3.2 2B, fine-tuned for client interaction and simulation tasks. This model is compatible with Huggingface Transformers and bitsandbytes for efficient inference. ## Model Details - **Base Model:** IBM Granite 3.2 2B (Unsloth) - **Precision:** 4-bit (safetensors, bitsandbytes) - **Architecture:** Causal Language Model - **Tokenizer:** Included (BPE) - **Intended Use:** Client simulation, dialogue, and assistant tasks ## Files Included - `model.safetensors` โ€” Main model weights (4-bit) - `config.json` โ€” Model configuration - `generation_config.json` โ€” Generation parameters - `tokenizer.json`, `tokenizer_config.json`, `vocab.json`, `merges.txt`, `special_tokens_map.json`, `added_tokens.json` โ€” Tokenizer files ## Example Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig model_id = "oneblackmage/Gradiant-ClientSim-v0.1" bnb_config = BitsAndBytesConfig(load_in_4bit=True) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_id) prompt = "<|user|>How can I improve my focus at work?\n<|assistant|>\n" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Quantization - This model is stored in 4-bit precision using [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) for efficient inference on modern GPUs. - For best performance, use with `transformers` >= 4.45 and `bitsandbytes` >= 0.43. ## License - See the LICENSE file or Huggingface model card for details. ## Citation If you use this model, please cite the original IBM Granite model and this fine-tuned version. --- For questions or issues, open an issue on the Huggingface repo or contact the maintainer.
TareksLab/Persona-V2-70B
TareksLab
2025-05-04T16:27:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:Sao10K/70B-L3.3-mhnnn-x1", "base_model:merge:Sao10K/70B-L3.3-mhnnn-x1", "base_model:SentientAGI/Dobby-Unhinged-Llama-3.3-70B", "base_model:merge:SentientAGI/Dobby-Unhinged-Llama-3.3-70B", "base_model:flammenai/Llama3.1-Flammades-70B", "base_model:merge:flammenai/Llama3.1-Flammades-70B", "base_model:flammenai/Mahou-1.5-llama3.1-70B", "base_model:merge:flammenai/Mahou-1.5-llama3.1-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T15:35:48Z
--- base_model: - flammenai/Llama3.1-Flammades-70B - Sao10K/70B-L3.3-mhnnn-x1 - flammenai/Mahou-1.5-llama3.1-70B - SentientAGI/Dobby-Unhinged-Llama-3.3-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Sao10K/70B-L3.3-mhnnn-x1](https://huggingface.co/Sao10K/70B-L3.3-mhnnn-x1) as a base. ### Models Merged The following models were included in the merge: * [flammenai/Llama3.1-Flammades-70B](https://huggingface.co/flammenai/Llama3.1-Flammades-70B) * [flammenai/Mahou-1.5-llama3.1-70B](https://huggingface.co/flammenai/Mahou-1.5-llama3.1-70B) * [SentientAGI/Dobby-Unhinged-Llama-3.3-70B](https://huggingface.co/SentientAGI/Dobby-Unhinged-Llama-3.3-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: flammenai/Mahou-1.5-llama3.1-70B - model: flammenai/Llama3.1-Flammades-70B - model: SentientAGI/Dobby-Unhinged-Llama-3.3-70B base_model: Sao10K/70B-L3.3-mhnnn-x1 merge_method: model_stock parameters: int8_mask: true dtype: float32 out_dtype: bfloat16 chat_template: llama3 tokenizer: source: base pad_to_multiple_of: 8 ```
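## Usage sketch

Configs like the one above are consumed by mergekit's CLI (e.g. `mergekit-yaml config.yaml ./merged-model`). Once merged, the checkpoint loads like any Llama-architecture model; the snippet below is a minimal sketch, not taken from the original card, and assumes enough GPU memory (or quantization) for a 70B model.

```python
# Minimal loading sketch for the merged model; bfloat16 plus device_map
# "auto" is assumed -- a 70B model needs multiple GPUs or quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TareksLab/Persona-V2-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```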
jdchang/full-with-label-bs-1024-sg-2-step-12170
jdchang
2025-05-04T16:25:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-04T16:25:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf
RichardErkhov
2025-05-04T16:22:29Z
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-04T13:20:36Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OpenMath2-Llama3.1-8B_icl1224 - GGUF - Model creator: https://huggingface.co/joyheyueya/ - Original model: https://huggingface.co/joyheyueya/OpenMath2-Llama3.1-8B_icl1224/ | Name | Quant method | Size | | ---- | ---- | ---- | | [OpenMath2-Llama3.1-8B_icl1224.Q2_K.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q2_K.gguf) | Q2_K | 2.96GB | | [OpenMath2-Llama3.1-8B_icl1224.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [OpenMath2-Llama3.1-8B_icl1224.IQ3_S.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.IQ3_S.gguf) | IQ3_S | 3.43GB | | [OpenMath2-Llama3.1-8B_icl1224.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [OpenMath2-Llama3.1-8B_icl1224.IQ3_M.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.IQ3_M.gguf) | IQ3_M | 3.52GB | | [OpenMath2-Llama3.1-8B_icl1224.Q3_K.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q3_K.gguf) | Q3_K | 3.74GB | | [OpenMath2-Llama3.1-8B_icl1224.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [OpenMath2-Llama3.1-8B_icl1224.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [OpenMath2-Llama3.1-8B_icl1224.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [OpenMath2-Llama3.1-8B_icl1224.Q4_0.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q4_0.gguf) | Q4_0 | 4.34GB | | [OpenMath2-Llama3.1-8B_icl1224.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [OpenMath2-Llama3.1-8B_icl1224.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [OpenMath2-Llama3.1-8B_icl1224.Q4_K.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q4_K.gguf) | Q4_K | 4.58GB | | [OpenMath2-Llama3.1-8B_icl1224.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [OpenMath2-Llama3.1-8B_icl1224.Q4_1.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q4_1.gguf) | Q4_1 | 4.78GB | | 
[OpenMath2-Llama3.1-8B_icl1224.Q5_0.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q5_0.gguf) | Q5_0 | 5.21GB | | [OpenMath2-Llama3.1-8B_icl1224.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [OpenMath2-Llama3.1-8B_icl1224.Q5_K.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q5_K.gguf) | Q5_K | 5.34GB | | [OpenMath2-Llama3.1-8B_icl1224.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [OpenMath2-Llama3.1-8B_icl1224.Q5_1.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q5_1.gguf) | Q5_1 | 5.65GB | | [OpenMath2-Llama3.1-8B_icl1224.Q6_K.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q6_K.gguf) | Q6_K | 6.14GB | | [OpenMath2-Llama3.1-8B_icl1224.Q8_0.gguf](https://huggingface.co/RichardErkhov/joyheyueya_-_OpenMath2-Llama3.1-8B_icl1224-gguf/blob/main/OpenMath2-Llama3.1-8B_icl1224.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
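## Usage sketch

A hedged example of running one of the GGUF files listed in the quant table above with `llama-cpp-python`; the chosen quant, context size, and GPU-offload settings are assumptions, and the file is assumed to have been downloaded locally first.

```python
# Illustrative sketch: run the Q4_K_M quant from the table with
# llama-cpp-python. All settings here are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenMath2-Llama3.1-8B_icl1224.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload every layer to GPU when available
)
out = llm("Solve step by step: 12 * 7 = ?", max_tokens=128)
print(out["choices"][0]["text"])
```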
jdchang/full-with-label-bs-1024-sg-2-step-12150
jdchang
2025-05-04T16:22:16Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-05-04T16:22:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
here4code/Qwen-3-32BModel-FineTuned-Medical-Reasoning-medical-o1-reasoning-SFT
here4code
2025-05-04T16:21:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T16:20:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlfoundations-dev/no_pipeline_math_100k
mlfoundations-dev
2025-05-04T16:20:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T22:05:52Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: no_pipeline_math_100k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # no_pipeline_math_100k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_math_100k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - total_eval_batch_size: 256 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
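## Usage sketch

The card gives no inference example, so here is a hedged sketch: since the checkpoint is a Qwen2.5-7B-Instruct fine-tune, the standard Qwen chat-template flow should apply.

```python
# Minimal sketch, assuming the fine-tune keeps the Qwen2.5 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/no_pipeline_math_100k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is the derivative of x**3?"}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```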
longdnk113/CNN_MNIST
longdnk113
2025-05-04T16:17:21Z
0
0
keras
[ "keras", "pattern-recognition", "mnist", "image-classification", "en", "dataset:ylecun/mnist", "license:mit", "region:us" ]
image-classification
2025-05-04T15:53:07Z
--- license: mit datasets: - ylecun/mnist language: - en metrics: - f1 - precision - recall - accuracy tags: - pattern-recognition - mnist - image-classification --- # MNIST Pattern Recognition with Convolutional Neural Network (CNN) This project implements a Convolutional Neural Network (CNN) for recognizing handwritten digits from the MNIST dataset. The model is built using TensorFlow and Keras, and it supports both single-GPU and multi-GPU training. The project includes training, testing, and a user-friendly GUI for inference. ## Features - **Customizable CNN Architecture**: Includes convolutional, pooling, normalization, and dense layers. - **Multi-GPU Support**: Leverages TensorFlow's `MirroredStrategy` for distributed training. - **Training Visualization**: Generates plots for training/validation accuracy and loss. - **Evaluation Metrics**: Outputs confusion matrix, classification report, and precision/recall/F1 scores. - **Interactive GUI**: Built with Streamlit for real-time image recognition. - **Docker Support**: Easily deployable using Docker. ## Model Architecture ![image](model.png) <br> The CNN model consists of: 1. Two convolutional layers with ReLU activation and max-pooling. 2. Layer normalization for improved convergence. 3. Fully connected dense layers with dropout for regularization. 4. Softmax output layer for classification into 10 digit classes. ## Training The model is trained on the MNIST dataset, which contains 60,000 training images and 10,000 test images of handwritten digits (28x28 grayscale). The training process includes: - Data normalization to scale pixel values to the range [0, 1]. - Categorical cross-entropy loss and accuracy as the evaluation metric. - Model checkpointing to save the best-performing model based on validation accuracy. ## Final result **Training history** ![image](training_history.png) <br> **Confusion matrix** ![image](confusion_matrix.png) <br> **Classification report** ![image](classification_report_image.png) <br> **Test result** ![image](test_result.png) <br> Full code at [Github](https://github.com/longdnk/Pattern-Recognition/tree/main/MNIST)
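## Architecture sketch

The layer list above maps onto a small Keras model. The sketch below is illustrative only: filter counts, dense width, and dropout rate are assumptions, and the authors' exact code lives in the linked GitHub repository.

```python
# Illustrative Keras sketch of the described architecture; hyperparameters
# (filter counts, dense width, dropout rate) are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.LayerNormalization(),           # normalization for convergence
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                   # regularization
    layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",  # matches the training description
    metrics=["accuracy"],
)
model.summary()
```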
young0ha/llama-3.2-1b-ko-morpheme
young0ha
2025-05-04T16:11:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T16:09:14Z
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** young0ha - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mluger/vitFaceExpressionCrossEntropyLoss
mluger
2025-05-04T16:11:11Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-21T13:54:58Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vitFaceExpressionCrossEntropyLoss results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vitFaceExpressionCrossEntropyLoss This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.8619 - Accuracy: 0.7033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2523 | 1.0 | 898 | 1.0366 | 0.6158 | | 0.9007 | 2.0 | 1796 | 0.9029 | 0.6723 | | 0.7628 | 3.0 | 2694 | 0.8649 | 0.6877 | | 0.6649 | 4.0 | 3592 | 0.8663 | 0.6946 | | 0.5811 | 5.0 | 4490 | 0.8625 | 0.6974 | | 0.4833 | 6.0 | 5388 | 0.8590 | 0.7027 | | 0.4175 | 7.0 | 6286 | 0.8605 | 0.7016 | | 0.3912 | 8.0 | 7184 | 0.8619 | 0.7033 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
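A short inference sketch (an addition, not from the original card): the label set depends on the unspecified imagefolder dataset, so read the expression classes off the pipeline output rather than assuming them; `face.jpg` is a placeholder path.

```python
from transformers import pipeline

# Image-classification pipeline over the fine-tuned ViT checkpoint
classifier = pipeline("image-classification",
                      model="mluger/vitFaceExpressionCrossEntropyLoss")
print(classifier("face.jpg"))  # placeholder path to a local face image
```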
dgiang02/GRPO_Qwen25_15B_32_005_2000kmap
dgiang02
2025-05-04T16:10:16Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T16:09:41Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
carowagner/classify-questions-2A
carowagner
2025-05-04T16:09:55Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T16:09:03Z
--- library_name: transformers tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.3329247236251831 f1_macro: 0.8700832445654317 f1_micro: 0.9 f1_weighted: 0.9012285477571311 precision_macro: 0.906878306878307 precision_micro: 0.9 precision_weighted: 0.9055238095238096 recall_macro: 0.8472222222222222 recall_micro: 0.9 recall_weighted: 0.9 accuracy: 0.9
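A hedged usage sketch for this AutoTrain checkpoint; the class labels come from the unpublished training data, so inspect the returned labels rather than assuming them.

```python
from transformers import pipeline

clf = pipeline("text-classification", model="carowagner/classify-questions-2A")
print(clf("I love AutoTrain"))  # widget example from the card's metadata
```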
phospho-app/Gr00t_simple_pawn_move_v3_500-94ahougeru
phospho-app
2025-05-04T16:09:51Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-05-04T15:57:58Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [dopaul/simple_pawn_move_v3](https://huggingface.co/datasets/dopaul/simple_pawn_move_v3) - **Wandb run URL**: None - **Epochs**: 5 - **Batch size**: 64 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
mradermacher/Phi-4-reasoning-plus-i1-GGUF
mradermacher
2025-05-04T16:09:17Z
0
0
transformers
[ "transformers", "gguf", "phi", "nlp", "math", "code", "chat", "conversational", "reasoning", "en", "base_model:microsoft/Phi-4-reasoning-plus", "base_model:quantized:microsoft/Phi-4-reasoning-plus", "license:mit", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-05-04T08:24:18Z
--- base_model: microsoft/Phi-4-reasoning-plus language: - en library_name: transformers license: mit license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE quantized_by: mradermacher tags: - phi - nlp - math - code - chat - conversational - reasoning --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/microsoft/Phi-4-reasoning-plus <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Phi-4-reasoning-plus-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ1_S.gguf) | i1-IQ1_S | 3.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ1_M.gguf) | i1-IQ1_M | 3.7 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ2_S.gguf) | i1-IQ2_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ2_M.gguf) | i1-IQ2_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q2_K.gguf) | i1-Q2_K | 5.6 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ3_S.gguf) | i1-IQ3_S | 6.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.0 | 
| | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q4_0.gguf) | i1-Q4_0 | 8.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q4_1.gguf) | i1-Q4_1 | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/Phi-4-reasoning-plus-i1-GGUF/resolve/main/Phi-4-reasoning-plus.i1-Q6_K.gguf) | i1-Q6_K | 12.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
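One possible way to run a downloaded quant locally is via llama-cpp-python; this is a sketch, not an official recipe: the file name matches the i1-Q4_K_M entry above, and `n_ctx` is an illustrative choice.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="Phi-4-reasoning-plus.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly explain what an imatrix quant is.", max_tokens=128)
print(out["choices"][0]["text"])
```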
MinaMila/llama_instbase_3b_LoRa_ACSEmployment_2_ep1_22
MinaMila
2025-05-04T16:07:47Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-04T16:07:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
6x16/whisper-small-nan-tw-quicktrain
6x16
2025-05-04T16:07:38Z
1
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "nan", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-03T14:43:25Z
--- library_name: transformers language: - nan license: apache-2.0 base_model: openai/whisper-small tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 model-index: - name: "A Quick-trained Whisper-Small model for Nan-TW (閩南話/台語) #JL" results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # A Quick-trained Whisper-Small model for Nan-TW (閩南話/台語) #JL This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 (nan-tw) dataset. It achieves the following results on the evaluation set: - Loss: 0.7699 - Cer: 138.9186 ## Transcription Example (Example Source: https://sutian.moe.edu.tw/zh-hant/su/27169/) <br> **Original sentence**: _萬事起頭難。_ <br> **Inference by _Whisper-Small_**: _บันซู ขี้เท่าหลัน_<br> **Inference by this model**: _萬事起頭難_ ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 0.3511 | 2.9240 | 1000 | 0.7512 | 125.6361 | | 0.0117 | 5.8480 | 2000 | 0.7479 | 141.2850 | | 0.001 | 8.7719 | 3000 | 0.7629 | 136.0814 | | 0.0006 | 11.6959 | 4000 | 0.7699 | 138.9186 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
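A minimal transcription sketch (added here for convenience, not from the original card); `clip.wav` is a placeholder for a local Taiwanese Hokkien recording.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="6x16/whisper-small-nan-tw-quicktrain")
print(asr("clip.wav")["text"])  # placeholder audio path
```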
AntoineBourgois/propp-fr_coreference-resolution_camembert-large_PER
AntoineBourgois
2025-05-04T16:07:33Z
0
1
null
[ "coreference-resolution", "anaphora-resolution", "mentions-linking", "literary-texts", "camembert", "nested-entities", "BookNLP-fr", "fr", "base_model:almanach/camembert-large", "base_model:finetune:almanach/camembert-large", "license:apache-2.0", "region:us" ]
null
2024-12-14T11:46:42Z
--- language: fr tags: - coreference-resolution - anaphora-resolution - mentions-linking - literary-texts - camembert - literary-texts - nested-entities - BookNLP-fr license: apache-2.0 metrics: - MUC - B3 - CEAF - CoNLL-F1 base_model: - almanach/camembert-large --- ## INTRODUCTION: This model, developed as part of the [BookNLP-fr project](https://github.com/lattice-8094/fr-litbank), is a **coreference resolution model** built on top of [camembert-large](https://huggingface.co/almanach/camembert-large) embeddings. It is trained to link mentions of the same entity across a text, focusing on literary works in French. This specific model has been trained to link entities of the following types: PER. ## MODEL PERFORMANCES (LOOCV): Overall Coreference Resolution Performances for non-overlapping windows of different length: | | Window width (tokens) | Document count | Sample count | MUC F1 | B3 F1 | CEAFe F1 | CONLL F1 | |----|-------------------------|------------------|----------------|----------|---------|------------|------------| | 0 | 500 | 29 | 677 | 92.18% | 83.86% | 76.86% | 84.30% | | 1 | 1,000 | 29 | 332 | 92.65% | 79.79% | 71.77% | 81.40% | | 2 | 2,000 | 28 | 162 | 93.29% | 75.85% | 67.34% | 78.83% | | 3 | 5,000 | 19 | 56 | 93.76% | 69.60% | 61.16% | 74.84% | | 4 | 10,000 | 18 | 27 | 94.28% | 65.73% | 58.59% | 72.86% | | 5 | 25,000 | 2 | 3 | 94.76% | 62.48% | 53.33% | 70.19% | | 6 | 50,000 | 1 | 1 | 97.39% | 56.43% | 47.40% | 67.07% | Coreference Resolution Performances on the fully annotated sample for each document: | | Token count | Mention count | MUC F1 | B3 F1 | CEAFe F1 | CONLL F1 | |----|---------------|-----------------|----------|---------|------------|------------| | 0 | 1,864 | 253 | 98.16% | 95.39% | 60.34% | 84.63% | | 1 | 2,034 | 321 | 97.47% | 92.79% | 80.04% | 90.10% | | 2 | 2,141 | 297 | 95.06% | 77.99% | 65.08% | 79.38% | | 3 | 2,251 | 235 | 91.95% | 80.47% | 46.56% | 73.00% | | 4 | 2,343 | 239 | 83.87% | 61.95% | 43.58% | 63.13% | | 5 | 2,441 | 314 | 91.85% | 55.70% | 60.82% | 69.46% | | 6 | 2,554 | 330 | 90.24% | 65.27% | 72.36% | 75.96% | | 7 | 2,860 | 369 | 93.65% | 84.89% | 74.93% | 84.49% | | 8 | 2,929 | 386 | 95.65% | 78.21% | 64.23% | 79.37% | | 9 | 4,067 | 429 | 97.46% | 85.20% | 62.52% | 81.73% | | 10 | 5,425 | 558 | 90.46% | 53.03% | 59.52% | 67.67% | | 11 | 10,305 | 1,436 | 96.37% | 74.83% | 59.91% | 77.04% | | 12 | 10,982 | 1,095 | 97.18% | 65.30% | 60.49% | 74.32% | | 13 | 11,768 | 1,734 | 93.30% | 64.14% | 64.12% | 73.85% | | 14 | 11,834 | 600 | 92.21% | 67.51% | 60.74% | 73.49% | | 15 | 11,902 | 1,692 | 95.03% | 58.83% | 45.59% | 66.49% | | 16 | 12,281 | 1,089 | 95.06% | 62.05% | 72.55% | 76.55% | | 17 | 12,285 | 1,489 | 95.28% | 77.84% | 57.43% | 76.85% | | 18 | 12,315 | 1,501 | 95.36% | 57.07% | 64.26% | 72.23% | | 19 | 12,389 | 1,654 | 93.19% | 54.21% | 51.84% | 66.41% | | 20 | 12,557 | 1,085 | 92.30% | 66.97% | 46.65% | 68.64% | | 21 | 12,703 | 1,731 | 90.40% | 53.70% | 61.37% | 68.49% | | 22 | 13,023 | 1,559 | 93.86% | 61.71% | 62.41% | 72.66% | | 23 | 14,299 | 1,582 | 97.23% | 69.25% | 67.04% | 77.84% | | 24 | 14,637 | 2,127 | 95.78% | 71.34% | 63.28% | 76.80% | | 25 | 15,408 | 1,769 | 92.85% | 54.11% | 56.12% | 67.69% | | 26 | 24,776 | 2,716 | 94.31% | 63.51% | 54.12% | 70.65% | | 27 | 30,987 | 2,980 | 89.55% | 54.25% | 59.68% | 67.83% | | 28 | 71,219 | 11,857 | 97.38% | 50.85% | 45.93% | 64.72% | ## TRAINING PARAMETERS: - Entities types: PER - Split strategy: Leave-one-out cross-validation (29 files) - Train/Validation split: 0.85 / 0.15 - 
Batch size: 16,000 - Initial learning rate: 0.0004 - Focal loss gamma: 1 - Focal loss alpha: 0.25 - Pronoun lookup antecedents: 30 - Common and Proper nouns lookup antecedents: 300 ## MODEL ARCHITECTURE: Model Input: 2,165 dimensions vector - Concatenated maximum context camembert-large embeddings (2 * 1,024 = 2,048 dimensions) - Additional mentions features (106 dimensions): - Length of mentions - Position of the mention's start token within the sentence - Grammatical category of the mentions (pronoun, common noun, proper noun) - Dependency relation of the mention's head (one-hot encoded) - Gender of the mentions (one-hot encoded) - Number (singular/plural) of the mentions (one-hot encoded) - Grammatical person of the mentions (one-hot encoded) - Additional mention pairs features (11 dimensions): - Distance between mention IDs - Distance between start tokens of mentions - Distance between end tokens of mentions - Distance between sentences containing mentions - Distance between paragraphs containing mentions - Difference in nesting levels of mentions - Ratio of shared tokens between mentions - Exact text match between mentions (binary) - Exact match of mention heads (binary) - Match of syntactic heads between mentions (binary) - Match of entity types between mentions (binary) - Hidden Layers: - Number of layers: 3 - Units per layer: 1,900 nodes - Activation function: relu - Dropout rate: 0.6 - Final Layer: - Type: Linear - Input: 1900 dimensions - Output: 1 dimension (mention pair coreference score) Model Output: Continuous prediction between 0 (not coreferent) and 1 (coreferent) indicating the degree of confidence. ## HOW TO USE: *** IN CONSTRUCTION *** ## TRAINING CORPUS: | | Document | Tokens Count | Is included in model eval | |----|----------------------------------------------------------------|----------------|------------------------------------| | 0 | 1836_Gautier-Theophile_La-morte-amoureuse | 14,299 tokens | **True** | | 1 | 1840_Sand-George_Pauline | 12,315 tokens | **True** | | 2 | 1842_Balzac-Honore-de_La-Maison-du-chat-qui-pelote | 24,776 tokens | **True** | | 3 | 1844_Balzac-Honore-de_La-Maison-Nucingen | 30,987 tokens | **True** | | 4 | 1844_Balzac-Honore-de_Sarrasine | 15,408 tokens | **True** | | 5 | 1856_Cousin-Victor_Madame-de-Hautefort | 11,768 tokens | **True** | | 6 | 1863_Gautier-Theophile_Le-capitaine-Fracasse | 11,834 tokens | **True** | | 7 | 1873_Zola-Emile_Le-ventre-de-Paris | 12,557 tokens | **True** | | 8 | 1881_Flaubert-Gustave_Bouvard-et-Pecuchet | 12,281 tokens | **True** | | 9 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-1_1-MADEMOISELLE-FIFI | 5,425 tokens | **True** | | 10 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-1_2-MADAME-BAPTISTE | 2,554 tokens | **True** | | 11 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-1_3-LA-ROUILLE | 2,929 tokens | **True** | | 12 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-2_1-MARROCA | 4,067 tokens | **True** | | 13 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-2_2-LA-BUCHE | 2,251 tokens | **True** | | 14 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-2_3-LA-RELIQUE | 2,034 tokens | **True** | | 15 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-3_1-FOU | 1,864 tokens | **True** | | 16 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-3_2-REVEIL | 2,141 tokens | **True** | | 17 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-3_3-UNE-RUSE | 2,441 tokens | **True** | | 18 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-3_4-A-CHEVAL | 2,860 tokens | **True** | | 19 | 1882_Guy-de-Maupassant_Mademoiselle-Fifi-3_5-UN-REVEILLON | 2,343 tokens | **True** | | 20 
| 1901_Lucie-Achard_Rosalie-de-Constant-sa-famille-et-ses-amis | 12,703 tokens | **True** | | 21 | 1903_Conan-Laure_Elisabeth_Seton | 13,023 tokens | **True** | | 22 | 1904_Rolland-Romain_Jean-Christophe_Tome-I-L-aube | 10,982 tokens | **True** | | 23 | 1904_Rolland-Romain_Jean-Christophe_Tome-II-Le-matin | 10,305 tokens | **True** | | 24 | 1917_Adèle-Bourgeois_Némoville | 12,389 tokens | **True** | | 25 | 1923_Radiguet-Raymond_Le-diable-au-corps | 14,637 tokens | **True** | | 26 | 1926_Audoux-Marguerite_De-la-ville-au-moulin | 11,902 tokens | **True** | | 27 | 1937_Audoux-Marguerite_Douce-Lumiere | 12,285 tokens | **True** | | 28 | Manon_Lescaut_PEDRO | 71,219 tokens | **True** | | 29 | TOTAL | 346,579 tokens | 29 files used for cross-validation | ## CONTACT: mail: antoine [dot] bourgois [at] protonmail [dot] com
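Since the official usage snippet is still under construction, here is a hedged PyTorch sketch of the mention-pair scorer laid out under MODEL ARCHITECTURE: the 2,165-dimensional input, three hidden layers of 1,900 ReLU units with dropout 0.6, and the final linear layer follow the card, while the sigmoid mapping the raw score into [0, 1] is an assumption.

```python
import torch
import torch.nn as nn

class MentionPairScorer(nn.Module):
    def __init__(self, in_dim: int = 2165, hidden: int = 1900, dropout: float = 0.6):
        super().__init__()
        blocks, dim = [], in_dim
        for _ in range(3):  # three hidden layers of 1,900 units (per the card)
            blocks += [nn.Linear(dim, hidden), nn.ReLU(), nn.Dropout(dropout)]
            dim = hidden
        blocks.append(nn.Linear(dim, 1))  # final linear pair-coreference score
        self.net = nn.Sequential(*blocks)

    def forward(self, pair_features: torch.Tensor) -> torch.Tensor:
        # Sigmoid is an assumption to match the stated [0, 1] confidence output
        return torch.sigmoid(self.net(pair_features)).squeeze(-1)

scorer = MentionPairScorer().eval()
print(scorer(torch.randn(4, 2165)).shape)  # torch.Size([4])
```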
dgiang02/GRPO_Qwen25_15B_16_005_2000kmap
dgiang02
2025-05-04T16:07:13Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T16:06:36Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF
mradermacher
2025-05-04T16:06:10Z
14
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksGraveyard/Progenitor-V1.2-LLaMa-70B", "base_model:quantized:TareksGraveyard/Progenitor-V1.2-LLaMa-70B", "license:llama3.3", "endpoints_compatible", "region:us", "conversational" ]
null
2025-01-29T15:10:25Z
--- base_model: TareksGraveyard/Progenitor-V1.2-LLaMa-70B language: - en library_name: transformers license: llama3.3 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/TareksGraveyard/Progenitor-V1.2-LLaMa-70B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Progenitor-V1.2-LLaMa-70B-GGUF/resolve/main/Progenitor-V1.2-LLaMa-70B.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
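For the multi-part Q6_K and Q8_0 files above, the linked README describes concatenating the parts back into a single GGUF before loading; assuming they are plain byte splits, a small Python helper along those lines (an illustration, not an official tool) could look like this.

```python
import shutil

parts = [
    "Progenitor-V1.2-LLaMa-70B.Q6_K.gguf.part1of2",
    "Progenitor-V1.2-LLaMa-70B.Q6_K.gguf.part2of2",
]
with open("Progenitor-V1.2-LLaMa-70B.Q6_K.gguf", "wb") as out:
    for part in parts:  # concatenate the raw split parts in order
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```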
carowagner/classify-questions-1B
carowagner
2025-05-04T16:05:19Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-04T16:04:23Z
--- library_name: transformers tags: - autotrain - text-classification base_model: google-bert/bert-base-uncased widget: - text: "I love AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.16687506437301636 f1_macro: 0.9044444444444445 f1_micro: 0.96 f1_weighted: 0.9579619047619048 precision_macro: 0.9844961240310077 precision_micro: 0.96 precision_weighted: 0.9618604651162791 recall_macro: 0.8452380952380952 recall_micro: 0.96 recall_weighted: 0.96 accuracy: 0.96
TOMFORD79/Fly62
TOMFORD79
2025-05-04T16:00:07Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-04T13:15:43Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
ma921/phi-2-sft-golden-hh
ma921
2025-05-04T15:58:06Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:finetune:microsoft/phi-2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T15:54:35Z
--- library_name: transformers license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: phi-2-sft-golden-hh results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2-sft-golden-hh This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
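A hedged generation sketch; the card does not document a prompt format, so the Human/Assistant framing below is an assumption based on the HH-style dataset name.

```python
from transformers import pipeline

gen = pipeline("text-generation", model="ma921/phi-2-sft-golden-hh")
prompt = "Human: How do I brew a good cup of coffee?\nAssistant:"
print(gen(prompt, max_new_tokens=64)[0]["generated_text"])
```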
phospho-app/Gr00t_simple_pawn_move_v3_500-rlgkmw3sk5
phospho-app
2025-05-04T15:54:36Z
0
0
null
[ "phosphobot", "gr00t", "region:us" ]
null
2025-05-04T15:46:38Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/root/src/helper.py", line 224, in predict raise RuntimeError(error_msg) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 296, in _flash_attention_forward attn_output = flash_attn_func( ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 1107, in flash_attn_func def flash_attn_func( KeyboardInterrupt 78%|███████▊ | 463/595 [06:01<01:42, 1.28it/s] During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/src/helper.py", line 226, in predict raise RuntimeError(e) RuntimeError: Training process failed with exit code 1: ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/transformers/modeling_flash_attention_utils.py", line 296, in _flash_attention_forward attn_output = flash_attn_func( ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 1107, in flash_attn_func def flash_attn_func( KeyboardInterrupt 78%|███████▊ | 463/595 [06:01<01:42, 1.28it/s] ``` ## Training parameters: - **Dataset**: [dopaul/simple_pawn_move_v3](https://huggingface.co/datasets/dopaul/simple_pawn_move_v3) - **Wandb run URL**: None - **Epochs**: 5 - **Batch size**: 64 - **Training steps**: 594 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
shubhamprshr/Qwen2.5-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_1200
shubhamprshr
2025-05-04T15:53:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "dataset:blocksworld-dataset", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T01:12:30Z
--- base_model: Qwen/Qwen2.5-3B-Instruct datasets: blocksworld-dataset library_name: transformers model_name: Qwen2.5-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_1200 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Qwen2.5-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_1200 This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [blocksworld-dataset](https://huggingface.co/datasets/blocksworld-dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="shubhamprshr/Qwen2.5-3B-Instruct_blocksworld6_sgrpo_balanced_0.5_0.5_True_1200", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shubhamprshr27-tamu/BW2/runs/hkzwevwu) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.14.0 - Transformers: 4.48.1 - Pytorch: 2.5.1 - Datasets: 3.1.0 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
thuanan/Llama-3.2-1B-Instruct-Chat-sft
thuanan
2025-05-04T15:53:04Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T15:48:52Z
--- base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** thuanan - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
infogeo/c7ed7350-031e-45e1-a8bf-2b5ecfa5a39e
infogeo
2025-05-04T15:47:43Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "base_model:adapter:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T15:43:26Z
--- library_name: peft base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060 tags: - axolotl - generated_from_trainer model-index: - name: c7ed7350-031e-45e1-a8bf-2b5ecfa5a39e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060 bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 5b937aaaaa1b4833_train_data.json ds_type: json format: custom path: /workspace/input_data/5b937aaaaa1b4833_train_data.json type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/c7ed7350-031e-45e1-a8bf-2b5ecfa5a39e hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/5b937aaaaa1b4833_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 89c6d633-5ae5-4ed8-aed8-e6cc264c27ff wandb_project: s56-28 wandb_run: your_name wandb_runid: 89c6d633-5ae5-4ed8-aed8-e6cc264c27ff warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # c7ed7350-031e-45e1-a8bf-2b5ecfa5a39e This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.5276 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5014 | 0.1403 | 150 | 1.5276 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
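Since the artifact is a PEFT LoRA adapter (see the axolotl config above), loading it for inference presumably follows the standard peft pattern; this sketch is an assumption, not text from the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "The-matt/llama2_ko-7b_distinctive-snowflake-182_1060"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "infogeo/c7ed7350-031e-45e1-a8bf-2b5ecfa5a39e")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```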
RazzzHF/razzzModels
RazzzHF
2025-05-04T15:46:42Z
0
4
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
2023-08-12T02:28:15Z
--- license: cc-by-nc-sa-4.0 ---
ivangrapher/52299cbd-52c3-4da8-ba0a-02751db70178
ivangrapher
2025-05-04T15:46:02Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/llama-2-7b", "base_model:adapter:unsloth/llama-2-7b", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-04T15:27:17Z
--- library_name: peft license: apache-2.0 base_model: unsloth/llama-2-7b tags: - axolotl - generated_from_trainer model-index: - name: 52299cbd-52c3-4da8-ba0a-02751db70178 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/llama-2-7b bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 92a4ab705f6ca41d_train_data.json ds_type: json format: custom path: /workspace/input_data/92a4ab705f6ca41d_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: ivangrapher/52299cbd-52c3-4da8-ba0a-02751db70178 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/92a4ab705f6ca41d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 233f72d1-e3b6-4877-90aa-2582c4f49bbb wandb_project: s56-7 wandb_run: your_name wandb_runid: 233f72d1-e3b6-4877-90aa-2582c4f49bbb warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 52299cbd-52c3-4da8-ba0a-02751db70178 This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.7977 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.817 | 0.1403 | 150 | 0.7977 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
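The card above stops at framework versions without a usage example; a minimal, hedged sketch for trying the adapter follows, assuming the repo holds standard PEFT LoRA weights (consistent with its `peft` library tag) and that the base model's tokenizer applies:

```python
# Hedged usage sketch for the LoRA adapter above; not part of the original card.
# Assumes standard PEFT adapter weights on top of unsloth/llama-2-7b.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "ivangrapher/52299cbd-52c3-4da8-ba0a-02751db70178",
    torch_dtype=torch.bfloat16,  # the run was trained in bf16
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b")

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```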
apriasmoro/45d38f63-6247-4d5b-8a83-b96a586e89ea
apriasmoro
2025-05-04T15:44:26Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T15:41:30Z
--- base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B library_name: transformers model_name: 45d38f63-6247-4d5b-8a83-b96a586e89ea tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 45d38f63-6247-4d5b-8a83-b96a586e89ea This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="apriasmoro/45d38f63-6247-4d5b-8a83-b96a586e89ea", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/llama3_dpo/runs/ssujh1xu) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Nickybcybc/Qwen3-lora_model
Nickybcybc
2025-05-04T15:44:07Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-04T15:43:18Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Nickybcybc - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
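No usage snippet accompanies this card; below is a hedged loading sketch. `FastLanguageModel` is Unsloth's loader, the 4-bit flag mirrors the bnb-4bit base model named above, and the sequence length is an assumption:

```python
# Hedged sketch, assuming this repo holds PEFT adapter weights trained with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Nickybcybc/Qwen3-lora_model",
    max_seq_length=2048,   # assumed; the card does not state the training length
    load_in_4bit=True,     # matches the unsloth-bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's fast generation mode

inputs = tokenizer("Hello, Qwen3!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```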
zayanhugsAI/twitter_roberta_finetuned
zayanhugsAI
2025-05-04T15:43:58Z
0
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T22:09:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
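Since the template's quick-start section is empty, here is a minimal hedged sketch based only on the repo's `text-classification` pipeline tag; the label set and intended task are undocumented:

```python
# Minimal hedged sketch; the task and labels are assumptions inferred from the
# repo's text-classification tag, since the card itself is an empty template.
from transformers import pipeline

classifier = pipeline("text-classification", model="zayanhugsAI/twitter_roberta_finetuned")
print(classifier("I love this new update!"))  # label names depend on the checkpoint
```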
MrRobotoAI/A14
MrRobotoAI
2025-05-04T15:42:48Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:Blackroot/Llama-3-LongStory-LORA", "base_model:merge:Blackroot/Llama-3-LongStory-LORA", "base_model:MrRobotoAI/A5", "base_model:merge:MrRobotoAI/A5", "base_model:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:merge:MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K", "base_model:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:merge:MrRobotoAI/Nord-8b-Uncensored-BASE-128k", "base_model:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "base_model:merge:MrRobotoAI/Odin-v2-8b-NOVELIST-128K", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T01:22:17Z
--- base_model: - MrRobotoAI/Nord-8b-Uncensored-BASE-128k - Blackroot/Llama-3-LongStory-LORA - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K - MrRobotoAI/Odin-v2-8b-NOVELIST-128K - MrRobotoAI/A5 library_name: transformers tags: - mergekit - merge --- # merge 13,801 R This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method using [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) as a base. ### Models Merged The following models were included in the merge: * [MrRobotoAI/Nord-8b-Uncensored-BASE-128k](https://huggingface.co/MrRobotoAI/Nord-8b-Uncensored-BASE-128k) + [Blackroot/Llama-3-LongStory-LORA](https://huggingface.co/Blackroot/Llama-3-LongStory-LORA) * [MrRobotoAI/Odin-v2-8b-NOVELIST-128K](https://huggingface.co/MrRobotoAI/Odin-v2-8b-NOVELIST-128K) + [MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K](https://huggingface.co/MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K) * [MrRobotoAI/A5](https://huggingface.co/MrRobotoAI/A5) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: task_arithmetic models: - model: MrRobotoAI/A5 parameters: weight: - filter: v_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: o_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: up_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: gate_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - filter: down_proj value: [0.8, 0.8, 0.5, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5, 0.8, 0.8] - value: 2 - model: MrRobotoAI/Nord-8b-Uncensored-BASE-128k+Blackroot/Llama-3-LongStory-LORA parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 1 - model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K+MrRobotoAI/Llama-3.1-8B-Instruct-Adapter-512K parameters: weight: - filter: v_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: o_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: up_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: gate_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - filter: down_proj value: [0.1, 0.1, 0.25, 0.2, 0.15, 0.1, 0.15, 0.2, 0.25, 0.1, 0.1] - value: 0 base_model: MrRobotoAI/Odin-v2-8b-NOVELIST-128K dtype: bfloat16 ```
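As a hedged reproduction note: mergekit ships a `mergekit-yaml` command-line entry point, so the configuration above could in principle be re-run as below; the file and output paths are placeholders:

```python
# Hedged sketch: invoke mergekit's CLI from Python; paths are placeholders.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "merge_config.yaml",  # the YAML configuration shown above, saved to disk
        "./merged-model",     # output directory for the merged weights
    ],
    check=True,
)
```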
fffanx/Llama-3.2-1B-Instruct-GRPO-agent16_E1
fffanx
2025-05-04T15:42:45Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-04T01:10:32Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent16_E1 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent16_E1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent16_E1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Alphatao/dc6c07f3-e309-4d05-aac1-fc85d2156cf3
Alphatao
2025-05-04T15:42:25Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:EleutherAI/pythia-1b", "base_model:finetune:EleutherAI/pythia-1b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T15:21:56Z
--- base_model: EleutherAI/pythia-1b library_name: transformers model_name: dc6c07f3-e309-4d05-aac1-fc85d2156cf3 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for dc6c07f3-e309-4d05-aac1-fc85d2156cf3 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Alphatao/dc6c07f3-e309-4d05-aac1-fc85d2156cf3", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/mcdqonvy) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent15_E1
fffanx
2025-05-04T15:42:01Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-04T01:09:43Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent15_E1 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent15_E1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent15_E1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AdoCleanCode/Youtube8M_real_model_v4_0.8
AdoCleanCode
2025-05-04T15:40:25Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T00:24:50Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: Youtube8M_real_model_v4_0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Youtube8M_real_model_v4_0.8 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5185 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.5816 | 1.0 | 20534 | 0.5626 | | 0.5621 | 2.0 | 41068 | 0.5390 | | 0.5404 | 3.0 | 61602 | 0.5275 | | 0.5242 | 4.0 | 82136 | 0.5215 | | 0.51 | 5.0 | 102670 | 0.5185 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.1.2+cu121 - Datasets 2.19.1 - Tokenizers 0.20.3
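The card documents only training; here is a minimal usage sketch, assuming the checkpoint behaves as a standard GPT-2 causal language model (consistent with its `text-generation` tag):

```python
# Hedged usage sketch; the training data and intended prompts are undocumented.
from transformers import pipeline

generator = pipeline("text-generation", model="AdoCleanCode/Youtube8M_real_model_v4_0.8")
print(generator("A video about", max_new_tokens=30)[0]["generated_text"])
```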
ustc-community/dfine_s_coco
ustc-community
2025-05-04T15:39:50Z
532
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-02-11T14:30:13Z
--- library_name: transformers license: apache-2.0 language: - en pipeline_tag: object-detection tags: - object-detection - vision datasets: - coco --- ## D-FINE ### **Overview** The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf). This is the HF transformers implementation for D-FINE: _coco -> model trained on COCO; _obj365 -> model trained on Object365; _obj2coco -> model trained on Object365 and then finetuned on COCO ### **Performance** D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD). ![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG) ### **How to use** ```python import torch import requests from PIL import Image from transformers import DFineForObjectDetection, AutoImageProcessor url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine_s_coco") model = DFineForObjectDetection.from_pretrained("ustc-community/dfine_s_coco") inputs = image_processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3) for result in results: for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]): score, label = score.item(), label_id.item() box = [round(i, 2) for i in box.tolist()] print(f"{model.config.id2label[label]}: {score:.2f} {box}") ``` ### **Training** D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), and AP<sup>val</sup><sub>5000</sub>, commonly used in real scenarios. ### **Applications** D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, and ensure high accuracy and speed in dynamic, real-world environments.
venkatramaraju/polyglot
venkatramaraju
2025-05-04T15:39:11Z
0
0
null
[ "region:us" ]
null
2025-05-04T15:34:31Z
# 🔠 Polyglot Polyglot is a high-performance multilingual tokenizer, built entirely from scratch in Go, that efficiently compresses text from 10 diverse languages using the Byte-Pair Encoding (BPE) algorithm. The system supports English, Hebrew, Bengali, Vietnamese, Korean, Arabic, Russian, Thai, Chinese, and Japanese. ## 📊 Metrics - **Compression Ratio:** 3.0 - **Vocabulary Size:** 40,146 - **Total Training Corpus:** 432,584,912 characters (10M sentences) ## 📈 Benchmarking Polyglot is evaluated against five SOTA tokenizers: Tiktoken, Transformers, SentencePiece, mBERT, and XLM. A total of 100,000 unseen sentences (10,000 per language across 10 languages) were sampled from the [statmt/cc100](https://huggingface.co/datasets/statmt/cc100?p=1) dataset. For each tokenizer and language, the mean compression ratio and token fertility were computed over the corresponding 10,000 sentences. ### 🔄 Compression Ratio | Language | polyglot | mbert | sentencepiece | tiktoken | transformers | xlm | |----------|----------|--------|---------------|----------|--------------|------| | ar | 2.61 | 2.43 | 2.76 | 3.03 | 1.00 | 3.03 | | bn | 2.80 | 2.07 | 2.84 | 2.56 | 0.52 | 2.83 | | en | 3.75 | 3.77 | 3.90 | 4.43 | 4.21 | 3.77 | | he | 2.32 | 2.29 | 2.51 | 2.54 | 0.88 | 2.80 | | ja | 1.51 | 1.25 | 11.80 | 1.35 | 0.72 | 1.78 | | ko | 1.51 | 1.48 | 1.93 | 1.62 | 0.50 | 1.78 | | ru | 3.36 | 3.17 | 1.37 | 3.72 | 0.94 | 3.85 | | th | 2.49 | 1.45 | 6.49 | 2.30 | 0.55 | 3.22 | | vi | 2.86 | 3.13 | 1.26 | 3.20 | 1.14 | 3.42 | | zh | 1.36 | 1.04 | 5.40 | 1.32 | 0.50 | 1.51 | #### Compression Ratio Rankings | Rank | Tokenizer | Average Compression Ratio | |:----:|:--------------:|:---------------------------:| | 1 | sentencepiece | **4.03** | | 2 | xlm | **2.80** | | 3 | tiktoken | **2.61** | | 4 | polyglot | **2.46** | | 5 | mbert | **2.21** | | 6 | transformers | **1.10** | ### 🧩 Token Fertility | Language | polyglot | mbert | sentencepiece | tiktoken | transformers | xlm | |----------|----------|--------|--------------|----------|--------------|------| | ar | 1.96 | 2.10 | 1.85 | 1.69 | 5.10 | 1.68 | | bn | 1.84 | 2.50 | 1.82 | 2.02 | 10.01 | 1.82 | | en | 1.19 | 1.19 | 1.15 | 1.01 | 1.06 | 1.19 | | he | 2.08 | 2.10 | 1.92 | 1.90 | 5.46 | 1.72 | | ja | 1.12 | 1.35 | 0.14 | 1.25 | 2.35 | 0.95 | | ko | 1.91 | 1.95 | 1.50 | 1.79 | 5.73 | 1.63 | | ru | 1.62 | 1.72 | 3.97 | 1.46 | 5.82 | 1.42 | | th | 1.76 | 3.02 | 0.67 | 1.90 | 7.96 | 1.36 | | vi | 1.65 | 1.50 | 3.74 | 1.47 | 4.13 | 1.37 | | zh | 1.21 | 1.58 | 0.30 | 1.25 | 3.31 | 1.09 | #### Token Fertility Rankings | Rank | Tokenizer | Average Token Fertility | |:----:|:--------------:|:--------------------------:| | 1 | transformers | **5.09** | | 2 | mbert | **1.90** | | 3 | sentencepiece | **1.71** | | 4 | polyglot | **1.63** | | 5 | tiktoken | **1.57** | | 6 | xlm | **1.42** | ### 🌐 📊 Cross-Lingual Consistency A primary goal of Polyglot is to achieve uniform tokenization quality across diverse languages. The following table compares how consistently each tokenizer performs across all 10 evaluated languages.
| Tokenizer | Compression Ratio σ | Token Fertility σ | Total σ | |---------------|------------------------|----------------------|-----------------------------| | xlm | 0.80 | 0.27 | 1.07 | | polyglot | 0.76 | 0.33 | 1.09 | | tiktoken | 0.97 | 0.32 | 1.29 | | mbert | 0.88 | 0.53 | 1.41 | | transformers | 1.06 | 2.48 | 3.54 | ## 🏋️ Training - **Dataset:** The tokenizer was trained on 10M sentences from the [opus-100 dataset](https://huggingface.co/datasets/Helsinki-NLP/opus-100), with 1M sentences per language. The language set was carefully selected to incorporate a sufficiently diverse range of scripts in our training dataset. - **Training Process:** The current version has a compression ratio of 3.0. Training runs are in progress to push this to 5.0. - **Implementation:** Data aggregation and formatting were implemented in Python. The core BPE algorithm and server were written in Go. Training data was chunked and streamed from S3 for efficient processing on machines of various sizes. ## 🚀 Deployment Deploy Polyglot locally using Docker with the following commands: ```bash # Build the Docker image docker build -t polyglot-app . # Run the container docker run -p 8080:8080 -p 3000:3000 polyglot-app ``` Navigate to [localhost:3000](http://localhost:3000/) to interact with the tool. ## 🌐 Website Visit [Polyglot's website](https://polyglot-k6h6.onrender.com/). Please note that the host instance automatically spins down during periods of inactivity, which may result in delays due to cold starts. It may take up to a minute to start up. Computation speed may vary between the hosted version and local deployment, depending on your local hardware specifications and the resources allocated by Render's infrastructure. **Website** ![**Website**](images/website.png) **Local** ![**Local**](images/local.png) ## 🖥️ Frontend The `ui` directory contains an intuitive user interface that provides the following capabilities: - Text input for tokenization - Visualization of tokenized segments and their corresponding integer representations - Decoding functionality to reconstruct the original text - Real-time metrics displaying compression ratio, token-to-character counts, and computation times for performance analysis ## ⚙️ Backend The backend exposes two RESTful endpoints: - **`/encode`:** Processes input text and returns the corresponding token sequence with text representations - **`/decode`:** Accepts a token sequence and reconstructs the original text ## 📄 License This project is licensed under the MIT License.
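A hedged sketch of calling the two endpoints from the Backend section; the README specifies only the paths and ports, so the JSON field names below are assumptions:

```python
# Hedged sketch; the request/response schemas are assumptions. Only the /encode
# and /decode paths and the 8080 port mapping come from the README above.
import requests

BASE = "http://localhost:8080"  # assumed API port from the docker run mapping

encoded = requests.post(f"{BASE}/encode", json={"text": "Hello, world!"}).json()
print(encoded)  # expected to contain token ids and their text segments

tokens = encoded.get("tokens", [])
decoded = requests.post(f"{BASE}/decode", json={"tokens": tokens}).json()
print(decoded)  # expected to reconstruct the original text
```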
ustc-community/dfine_m_obj2coco
ustc-community
2025-05-04T15:38:14Z
62
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T11:39:09Z
--- library_name: transformers license: apache-2.0 language: - en pipeline_tag: object-detection tags: - object-detection - vision datasets: - coco --- ## D-FINE ### **Overview** The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf). This is the HF transformers implementation for D-FINE: _coco -> model trained on COCO; _obj365 -> model trained on Object365; _obj2coco -> model trained on Object365 and then finetuned on COCO ### **Performance** D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD). ![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG) ### **How to use** ```python import torch import requests from PIL import Image from transformers import DFineForObjectDetection, AutoImageProcessor url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine_m_obj2coco") model = DFineForObjectDetection.from_pretrained("ustc-community/dfine_m_obj2coco") inputs = image_processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3) for result in results: for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]): score, label = score.item(), label_id.item() box = [round(i, 2) for i in box.tolist()] print(f"{model.config.id2label[label]}: {score:.2f} {box}") ``` ### **Training** D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), and AP<sup>val</sup><sub>5000</sub>, commonly used in real scenarios. ### **Applications** D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, and ensure high accuracy and speed in dynamic, real-world environments.
ustc-community/dfine_l_obj365
ustc-community
2025-05-04T15:37:30Z
96
0
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "dataset:objects365", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T13:00:47Z
--- library_name: transformers license: apache-2.0 language: - en pipeline_tag: object-detection tags: - object-detection - vision datasets: - coco - objects365 --- ## D-FINE ### **Overview** The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf). This is the HF transformers implementation for D-FINE: _coco -> model trained on COCO; _obj365 -> model trained on Object365; _obj2coco -> model trained on Object365 and then finetuned on COCO ### **Performance** D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD). ![COCO365.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO365.PNG) ![COCO365-2.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO365-2.PNG) ### **How to use** ```python import torch import requests from PIL import Image from transformers import DFineForObjectDetection, AutoImageProcessor url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine_l_obj365") model = DFineForObjectDetection.from_pretrained("ustc-community/dfine_l_obj365") inputs = image_processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3) for result in results: for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]): score, label = score.item(), label_id.item() box = [round(i, 2) for i in box.tolist()] print(f"{model.config.id2label[label]}: {score:.2f} {box}") ``` ### **Training** D-FINE is trained on the COCO and Objects365 (Lin et al. [2014]) train2017 splits and validated on the combined COCO + Objects365 val2017 dataset. We report the standard AP metrics (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05), and AP<sup>val</sup><sub>5000</sub>, commonly used in real scenarios. ### **Applications** D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, and ensure high accuracy and speed in dynamic, real-world environments.
fffanx/Llama-3.2-1B-Instruct-GRPO-agent8_E1
fffanx
2025-05-04T15:36:55Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-04T00:40:32Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent8_E1 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent8_E1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent8_E1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
fffanx/Llama-3.2-1B-Instruct-GRPO-agent7_E1
fffanx
2025-05-04T15:36:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:grouped_dataset", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-04T00:40:02Z
--- base_model: meta-llama/Llama-3.2-1B-Instruct datasets: grouped_dataset library_name: transformers model_name: Llama-3.2-1B-Instruct-GRPO-agent7_E1 tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for Llama-3.2-1B-Instruct-GRPO-agent7_E1 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the [grouped_dataset](https://huggingface.co/datasets/grouped_dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="fffanx/Llama-3.2-1B-Instruct-GRPO-agent7_E1", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
apriasmoro/da55b775-f3e0-4a9d-aadd-72666dabfccd
apriasmoro
2025-05-04T15:35:17Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B", "base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T15:31:14Z
--- base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B library_name: transformers model_name: da55b775-f3e0-4a9d-aadd-72666dabfccd tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for da55b775-f3e0-4a9d-aadd-72666dabfccd This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="apriasmoro/da55b775-f3e0-4a9d-aadd-72666dabfccd", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/llama3_dpo/runs/glb116cq) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0 - Transformers: 4.46.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Jeremmmyyyyy/gemma-3-1b-Math
Jeremmmyyyyy
2025-05-04T15:34:57Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-04T13:09:51Z
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: gemma-3-1b-it tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for gemma-3-1b-it This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Jeremmmyyyyy/gemma-3-1b-it", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```