| Column | Dtype | Min | Max |
|---------------|-----------------------|---------------------|---------------------|
| modelId | string | length 5 | length 139 |
| author | string | length 2 | length 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-12 06:28:00 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (517 classes) | n/a | n/a |
| tags | list | length 1 | length 4.05k |
| pipeline_tag | string (55 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-12 06:24:43 |
| card | string | length 11 | length 1.01M |
KRLabsOrg/lettucedect-210m-eurobert-es-v1
KRLabsOrg
2025-05-18T18:21:34Z
0
0
transformers
[ "transformers", "safetensors", "eurobert", "token-classification", "token classification", "hallucination detection", "question answer", "custom_code", "es", "dataset:KRLabsOrg/ragtruth-es-translated", "arxiv:2502.17125", "base_model:EuroBERT/EuroBERT-210m", "base_model:finetune:EuroBERT/EuroBERT-210m", "license:mit", "autotrain_compatible", "region:us" ]
token-classification
2025-05-18T17:01:52Z
--- license: mit language: - es base_model: - EuroBERT/EuroBERT-210m pipeline_tag: token-classification tags: - token classification - hallucination detection - transformers - question answer datasets: - KRLabsOrg/ragtruth-es-translated --- # LettuceDetect: Spanish Hallucination Detection Model <p align="center"> <img src="https://github.com/KRLabsOrg/LettuceDetect/blob/feature/cn_llm_eval/assets/lettuce_detective_multi.png?raw=true" alt="LettuceDetect Logo" width="400"/> </p> **Model Name:** lettucedect-210m-eurobert-es-v1 **Organization:** KRLabsOrg **GitHub:** https://github.com/KRLabsOrg/LettuceDetect ## Overview LettuceDetect is a transformer-based model for hallucination detection on context-answer pairs, designed for multilingual Retrieval-Augmented Generation (RAG) applications. This model is built on **EuroBERT-210M**, chosen for its extended context support (up to **8192 tokens**) and strong multilingual capabilities. This long-context capability is critical for tasks where detailed and extensive documents must be processed to accurately determine whether an answer is supported by the provided context. **This is our Spanish base model, built on the EuroBERT-210M architecture.** ## Model Details - **Architecture:** EuroBERT-210M with extended context support (up to 8192 tokens) - **Task:** Token Classification / Hallucination Detection - **Training Dataset:** RagTruth-ES (translated from the original RAGTruth dataset) - **Language:** Spanish ## How It Works The model is trained to identify tokens in the Spanish answer text that are not supported by the given context. During inference, the model returns token-level predictions, which are then aggregated into spans. This lets users see exactly which parts of the answer are considered hallucinated. ## Usage ### Installation Install the `lettucedetect` package: ```bash pip install lettucedetect ``` ### Using the model ```python from lettucedetect.models.inference import HallucinationDetector # For a transformer-based approach: detector = HallucinationDetector( method="transformer", model_path="KRLabsOrg/lettucedect-210m-eurobert-es-v1", lang="es", trust_remote_code=True ) contexts = ["Francia es un país de Europa. La capital de Francia es París. La población de Francia es de 67 millones."] question = "¿Cuál es la capital de Francia? ¿Cuál es la población de Francia?" answer = "La capital de Francia es París. La población de Francia es de 69 millones." # Get span-level predictions indicating which parts of the answer are considered hallucinated. predictions = detector.predict(context=contexts, question=question, answer=answer, output_format="spans") print("Predicciones:", predictions) # Predicciones: [{'start': 33, 'end': 76, 'confidence': 0.9215637326240539, 'text': ' La población de Francia es de 69 millones.'}] ``` ## Performance **Results on Translated RAGTruth-ES** We evaluate our Spanish models on translated versions of the [RAGTruth](https://aclanthology.org/2024.acl-long.585/) dataset. The EuroBERT-210M Spanish model achieves an F1 score of 71.38%, outperforming prompt-based methods like GPT-4.1-mini (62.40%) by +8.98 percentage points. 
For detailed performance metrics, see the table below: | Language | Model | Precision (%) | Recall (%) | F1 (%) | GPT-4.1-mini F1 (%) | Δ F1 (%) | |----------|-----------------|---------------|------------|--------|---------------------|----------| | Spanish | EuroBERT-210M | 69.48 | 73.38 | 71.38 | 62.40 | +8.98 | | Spanish | EuroBERT-610M | 76.32 | 70.41 | 73.25 | 62.40 | +10.85 | While the 610M variant achieves higher F1 score, the 210M model offers a good balance between accuracy and computational efficiency, processing examples approximately 3× faster. It also shows particularly strong recall performance. ## Citing If you use the model or the tool, please cite the following paper: ```bibtex @misc{Kovacs:2025, title={LettuceDetect: A Hallucination Detection Framework for RAG Applications}, author={Ádám Kovács and Gábor Recski}, year={2025}, eprint={2502.17125}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2502.17125}, } ```
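As a sketch of the aggregation step described in "How It Works" above, consecutive hallucinated tokens can be merged into character spans roughly as follows. This is an illustration only, not LettuceDetect's internal code; the token dictionary format (`start`/`end` offsets plus a boolean flag) is an assumption made for the example.

```python
# Minimal sketch: merge consecutive hallucinated tokens into character spans.
# The input format (start/end offsets + boolean flag) is assumed for
# illustration and does not mirror LettuceDetect's internal representation.
def tokens_to_spans(token_preds):
    spans, current = [], None
    for tok in token_preds:
        if tok["hallucinated"]:
            if current is None:
                current = {"start": tok["start"], "end": tok["end"]}
            else:
                current["end"] = tok["end"]  # extend the open span
        elif current is not None:
            spans.append(current)
            current = None
    if current is not None:
        spans.append(current)
    return spans

print(tokens_to_spans([
    {"start": 0, "end": 5, "hallucinated": False},
    {"start": 6, "end": 10, "hallucinated": True},
    {"start": 11, "end": 15, "hallucinated": True},
]))  # -> [{'start': 6, 'end': 15}]
```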
Nagi-ovo/nanoVLM-222M
Nagi-ovo
2025-05-18T18:09:23Z
1
0
nanovlm
[ "nanovlm", "safetensors", "vision-language", "multimodal", "research", "image-text-to-text", "license:mit", "region:us" ]
image-text-to-text
2025-05-16T07:59:10Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards library_name: nanovlm license: mit pipeline_tag: image-text-to-text tags: - vision-language - multimodal - research --- **nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built in pure PyTorch, the entire model architecture and training logic fit within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M-parameter model. For more information, see the base model at https://huggingface.co/lusxvr/nanoVLM-222M. **Usage:** Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM. Follow the install instructions and run the following code: ```python from models.vision_language_model import VisionLanguageModel model = VisionLanguageModel.from_pretrained("Nagi-ovo/nanoVLM-222M") ```
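The 222M figure is easy to verify once the model is loaded; a quick sanity check with plain PyTorch, assuming `model` was loaded with the snippet above:

```python
# Count the parameters of the loaded nanoVLM model (plain PyTorch;
# assumes `model` from the snippet above).
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.0f}M parameters")  # expected: ~222M
```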
LarryAIDraw/newbie_succubus-06
LarryAIDraw
2025-05-18T17:52:03Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-18T17:25:58Z
--- license: creativeml-openrail-m --- https://civitai.com/models/606246/newbie-succubus-of-konosuba-pdxl
CarlosCarballido/apau-ii-ner
CarlosCarballido
2025-05-18T17:48:24Z
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:mrm8488/bert-spanish-cased-finetuned-ner", "base_model:finetune:mrm8488/bert-spanish-cased-finetuned-ner", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-18T17:43:14Z
--- library_name: transformers base_model: mrm8488/bert-spanish-cased-finetuned-ner tags: - generated_from_trainer model-index: - name: bert-spanish-cased-finetuned-ner-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-spanish-cased-finetuned-ner-ner This model is a fine-tuned version of [mrm8488/bert-spanish-cased-finetuned-ner](https://huggingface.co/mrm8488/bert-spanish-cased-finetuned-ner) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.8.0.dev20250518+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
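The card does not include inference code; as a minimal sketch, the checkpoint should work with the standard `transformers` token-classification pipeline (the label set depends on the undocumented fine-tuning data, and the example sentence is illustrative):

```python
from transformers import pipeline

# Standard token-classification pipeline; aggregation_strategy="simple"
# merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="CarlosCarballido/apau-ii-ner",
    aggregation_strategy="simple",
)
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```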
skanuri/openchat_3.5-optima_restore-epoch17-adapters
skanuri
2025-05-18T17:42:10Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-18T17:41:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LarryAIDraw/shenhe_pony
LarryAIDraw
2025-05-18T17:22:13Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-05-18T08:05:39Z
--- license: creativeml-openrail-m --- https://civitai.com/models/154447/genshinxl-shenhe-2-outfits
gtfintechlab/model_WCB_certain_label
gtfintechlab
2025-05-18T16:47:34Z
33
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "en", "dataset:gtfintechlab/WCB", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-13T19:04:11Z
--- license: cc-by-nc-sa-4.0 datasets: - gtfintechlab/WCB language: - en metrics: - accuracy - f1 - precision - recall base_model: - roberta-base pipeline_tag: text-classification library_name: transformers --- # World of Central Banks Model **Model Name:** WCB Uncertainty Estimation Model **Model Type:** Text Classification **Language:** English **License:** [CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) **Base Model:** [RoBERTa](https://huggingface.co/FacebookAI/roberta-base) **Dataset Used for Training:** [gtfintechlab/all_annotated_sentences_25000](https://huggingface.co/datasets/gtfintechlab/all_annotated_sentences_25000) ## Model Overview The WCB Uncertainty Estimation Model is a fine-tuned RoBERTa-based model designed to classify text data on **Uncertainty Estimation**. This label is annotated in the model_WCB_certainty_label dataset, which focuses on meeting minutes from all 25 central banks listed in the paper _Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications_. ## Intended Use This model is intended for researchers and practitioners working on subjective text classification, particularly within financial and economic contexts. It is specifically designed to assess the **Uncertainty Estimation** label, aiding the analysis of subjective content in financial and economic communications. ## How to Use To use this model, load it with the Hugging Face `transformers` library: ```python from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoConfig # Load tokenizer, model, and configuration tokenizer = AutoTokenizer.from_pretrained("gtfintechlab/model_WCB_certainty_label", do_lower_case=True, do_basic_tokenize=True) model = AutoModelForSequenceClassification.from_pretrained("gtfintechlab/model_WCB_certainty_label", num_labels=2) config = AutoConfig.from_pretrained("gtfintechlab/model_WCB_certainty_label") # Initialize text classification pipeline classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, config=config, framework="pt") # Classify Uncertainty Estimation sentences = [ "[Sentence 1]", "[Sentence 2]" ] results = classifier(sentences, batch_size=128, truncation="only_first") print(results) ``` In this script: - **Tokenizer and Model Loading:** Loads the pre-trained tokenizer and model from `gtfintechlab/model_WCB_certain_label`. - **Configuration:** Loads model configuration parameters, including the number of labels. - **Pipeline Initialization:** Initializes a text classification pipeline with the model, tokenizer, and configuration. - **Classification:** Labels sentences based on **Uncertainty Estimation**. Ensure your environment has the necessary dependencies installed. ## Label Interpretation - **LABEL_0:** Certain; indicates that the sentence presents information definitively. - **LABEL_1:** Uncertain; indicates that the sentence presents information with speculation, possibility, or doubt. ## Training Data The model was trained on the model_WCB_certainty_label dataset, comprising annotated sentences from 25 central banks, labeled by Uncertainty Estimation. The dataset includes training, validation, and test splits. 
## Citation If you use this model in your research, please cite the model_WCB_certainty_label paper: ```bibtex @article{WCBShahSukhaniPardawala, title={Words That Unite The World: A Unified Framework for Deciphering Global Central Bank Communications}, author={Agam Shah and Siddhant Sukhani and Huzaifa Pardawala and others}, year={2025} } ``` For more details, refer to the [model_WCB_certainty_label dataset documentation](https://huggingface.co/gtfintechlab/model_WCB_certainty_label). ## Contact For any model_WCB_certainty_label-related issues and questions, please contact: - Huzaifa Pardawala: huzaifahp7[at]gatech[dot]edu - Siddhant Sukhani: ssukhani3[at]gatech[dot]edu - Agam Shah: ashah482[at]gatech[dot]edu
vmpsergio/f79876f8-5dd8-430c-8fb9-228c8dca5c72
vmpsergio
2025-05-18T16:43:32Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-18T15:19:25Z
--- library_name: peft license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - axolotl - generated_from_trainer model-index: - name: f79876f8-5dd8-430c-8fb9-228c8dca5c72 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: true adapter: lora base_model: elyza/Llama-3-ELYZA-JP-8B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 8b8d0491752a3828_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: inputs field_output: targets format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: vmpsergio/f79876f8-5dd8-430c-8fb9-228c8dca5c72 hub_repo: null hub_strategy: end hub_token: null learning_rate: 2.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/8b8d0491752a3828_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e0525eac-006f-4641-84c2-a13e39208256 wandb_project: s56-8 wandb_run: your_name wandb_runid: e0525eac-006f-4641-84c2-a13e39208256 warmup_steps: 40 weight_decay: 0.01 xformers_attention: false ``` </details><br> # f79876f8-5dd8-430c-8fb9-228c8dca5c72 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9931 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 40 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.6021 | 0.0332 | 400 | 1.9931 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
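The card omits usage instructions; since this repo holds a LoRA adapter for the listed base model, a minimal sketch with the standard `peft` API would look like the following (untested against this checkpoint):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "elyza/Llama-3-ELYZA-JP-8B"
adapter_id = "vmpsergio/f79876f8-5dd8-430c-8fb9-228c8dca5c72"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("こんにちは。自己紹介をしてください。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```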
SudeErturk/nlp-assignment2
SudeErturk
2025-05-18T16:29:14Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-18T14:37:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kokovova/80e70bf7-db26-4b76-a37d-357562ca8be0
kokovova
2025-05-18T15:51:50Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:elyza/Llama-3-ELYZA-JP-8B", "base_model:adapter:elyza/Llama-3-ELYZA-JP-8B", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-18T15:15:40Z
--- library_name: peft license: llama3 base_model: elyza/Llama-3-ELYZA-JP-8B tags: - axolotl - generated_from_trainer model-index: - name: 80e70bf7-db26-4b76-a37d-357562ca8be0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: elyza/Llama-3-ELYZA-JP-8B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 8b8d0491752a3828_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: inputs field_output: targets format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: kokovova/80e70bf7-db26-4b76-a37d-357562ca8be0 hub_repo: null hub_strategy: end hub_token: null learning_rate: 2.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/8b8d0491752a3828_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|eot_id|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e0525eac-006f-4641-84c2-a13e39208256 wandb_project: s56-28 wandb_run: your_name wandb_runid: e0525eac-006f-4641-84c2-a13e39208256 warmup_steps: 40 weight_decay: 0.01 xformers_attention: false ``` </details><br> # 80e70bf7-db26-4b76-a37d-357562ca8be0 This model is a fine-tuned version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 40 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.6017 | 0.0332 | 400 | 1.9935 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
NhaiDao/DPO_llama3_2_1B_16_005_5k
NhaiDao
2025-05-18T15:51:27Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "dpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-18T15:51:05Z
--- library_name: transformers tags: - unsloth - trl - dpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Demircan12/llama-3.1-8b-instruct-YeditepeFT
Demircan12
2025-05-18T15:27:39Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-18T15:27:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Browncat/for_test
Browncat
2025-05-18T15:24:47Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-03-09T14:12:20Z
--- license: apache-2.0 ---
Dot3GP/flux_dot
Dot3GP
2025-05-18T12:16:55Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-18T11:33:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: DOT --- # Flux_Dot <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `DOT` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "DOT", "lora_weights": "https://huggingface.co/Dot3GP/flux_dot/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Dot3GP/flux_dot', weight_name='lora.safetensors') image = pipeline('DOT').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2048 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Dot3GP/flux_dot/discussions) to add images that show off what you’ve made with this LoRA.
Osoji404/Soji
Osoji404
2025-05-18T11:57:27Z
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
2025-05-18T11:57:23Z
--- license: bigscience-openrail-m ---
bunnycore/Blabbertron-1.2
bunnycore
2025-05-18T11:15:23Z
42
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "zho", "eng", "fra", "spa", "por", "deu", "ita", "rus", "jpn", "kor", "vie", "tha", "ara", "arxiv:2403.19522", "base_model:Krystalan/DRT-7B", "base_model:merge:Krystalan/DRT-7B", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:merge:Qwen/Qwen2.5-7B-Instruct", "base_model:Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview", "base_model:merge:Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview", "base_model:bunnycore/Qwen-2.5-7b-rp-lora", "base_model:merge:bunnycore/Qwen-2.5-7b-rp-lora", "base_model:bunnycore/Qwen-2.5-7b-s1k-lora_model", "base_model:merge:bunnycore/Qwen-2.5-7b-s1k-lora_model", "base_model:gz987/qwen2.5-7b-cabs-v0.3", "base_model:merge:gz987/qwen2.5-7b-cabs-v0.3", "base_model:marcuscedricridia/Abus-7B-Instruct", "base_model:merge:marcuscedricridia/Abus-7B-Instruct", "base_model:open-r1/OlympicCoder-7B", "base_model:merge:open-r1/OlympicCoder-7B", "base_model:simplescaling/s1.1-7B", "base_model:merge:simplescaling/s1.1-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-03-13T22:32:25Z
--- base_model: - gz987/qwen2.5-7b-cabs-v0.3 - bunnycore/Qwen-2.5-7b-s1k-lora_model - simplescaling/s1.1-7B - gz987/qwen2.5-7b-cabs-v0.3 - bunnycore/Qwen-2.5-7b-rp-lora - marcuscedricridia/pre-cursa-o1-v1.2 - Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview - Krystalan/DRT-7B - Qwen/Qwen2.5-7B-Instruct - open-r1/OlympicCoder-7B library_name: transformers tags: - mergekit - merge language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) + [bunnycore/Qwen-2.5-7b-s1k-lora_model](https://huggingface.co/bunnycore/Qwen-2.5-7b-s1k-lora_model) * [simplescaling/s1.1-7B](https://huggingface.co/simplescaling/s1.1-7B) * [gz987/qwen2.5-7b-cabs-v0.3](https://huggingface.co/gz987/qwen2.5-7b-cabs-v0.3) + [bunnycore/Qwen-2.5-7b-rp-lora](https://huggingface.co/bunnycore/Qwen-2.5-7b-rp-lora) * [marcuscedricridia/pre-cursa-o1-v1.2](https://huggingface.co/marcuscedricridia/pre-cursa-o1-v1.2) * [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) * [Krystalan/DRT-7B](https://huggingface.co/Krystalan/DRT-7B) * [open-r1/OlympicCoder-7B](https://huggingface.co/open-r1/OlympicCoder-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Krystalan/DRT-7B parameters: weight: 0.3 - model: simplescaling/s1.1-7B parameters: weight: 0.3 - model: Krystalan/DRT-7B parameters: weight: 0.3 - model: open-r1/OlympicCoder-7B parameters: weight: 0.3 - model: marcuscedricridia/pre-cursa-o1-v1.2 parameters: weight: 0.3 - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview - model: gz987/qwen2.5-7b-cabs-v0.3+bunnycore/Qwen-2.5-7b-s1k-lora_model - model: gz987/qwen2.5-7b-cabs-v0.3+bunnycore/Qwen-2.5-7b-rp-lora base_model: Qwen/Qwen2.5-7B-Instruct merge_method: model_stock parameters: dtype: bfloat16 tokenizer_source: Qwen/Qwen2.5-7B-Instruct ```
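The card gives the merge recipe but no inference code; because the merge uses the Qwen2.5-7B-Instruct tokenizer, a standard chat-template sketch should apply (a hedged example, not verified against this checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Blabbertron-1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Qwen2.5-style chat formatting via the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain model merging in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```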
tdooms/svhn-l1-flip
tdooms
2025-05-18T10:29:12Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-18T10:29:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jojop/msftpnt2
jojop
2025-05-18T10:27:00Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-18T10:26:59Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: MSFTPNT --- # Msftpnt2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `MSFTPNT` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "MSFTPNT", "lora_weights": "https://huggingface.co/jojop/msftpnt2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jojop/msftpnt2', weight_name='lora.safetensors') image = pipeline('MSFTPNT').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jojop/msftpnt2/discussions) to add images that show off what you’ve made with this LoRA.
karolinaMikulec/distilbert-base-uncased-depression-classification
karolinaMikulec
2025-05-18T10:06:13Z
10
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-06T20:27:12Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-depression-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-depression-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5886 - Accuracy: 0.741 - F1: 0.7387 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8039 | 1.0 | 250 | 0.6401 | 0.713 | 0.7112 | | 0.5891 | 2.0 | 500 | 0.5744 | 0.7335 | 0.7316 | | 0.5145 | 3.0 | 750 | 0.5560 | 0.7405 | 0.7382 | | 0.4426 | 4.0 | 1000 | 0.5660 | 0.7505 | 0.7480 | | 0.3806 | 5.0 | 1250 | 0.5886 | 0.741 | 0.7387 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
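The card lists metrics but no usage snippet; a minimal sketch with the standard `transformers` text-classification pipeline (label names come from the checkpoint's config and are not documented in the card):

```python
from transformers import pipeline

# Standard text-classification pipeline for the fine-tuned DistilBERT model.
clf = pipeline(
    "text-classification",
    model="karolinaMikulec/distilbert-base-uncased-depression-classification",
)
print(clf("I haven't been able to get out of bed or enjoy anything for weeks."))
```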
KingEmpire/sn21_omega_1805_2
KingEmpire
2025-05-18T09:35:10Z
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-18T09:19:27Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
OwLim/whisper-java-SLR41-SLR35
OwLim
2025-05-18T08:41:39Z
29
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "jav", "dataset:SLR41_35", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-16T03:13:20Z
--- library_name: transformers language: - jav license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - SLR41_35 metrics: - wer model-index: - name: Whisper Small Java results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: SLR Javanese 41_35 type: SLR41_35 args: 'config: java, split: train, test' metrics: - type: wer value: 29.24663420223432 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Java This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the SLR Javanese 41_35 dataset. It achieves the following results on the evaluation set: - Loss: 0.4200 - Wer: 29.2466 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.4922 | 0.16 | 100 | 0.6047 | 37.4678 | | 0.435 | 0.32 | 200 | 0.5572 | 35.9424 | | 0.5688 | 0.48 | 300 | 0.5090 | 33.5649 | | 0.4779 | 0.64 | 400 | 0.4799 | 31.8390 | | 0.4247 | 0.8 | 500 | 0.4540 | 30.8364 | | 0.42 | 0.96 | 600 | 0.4368 | 30.2492 | | 0.2276 | 1.12 | 700 | 0.4330 | 29.6333 | | 0.2137 | 1.28 | 800 | 0.4264 | 29.5832 | | 0.236 | 1.44 | 900 | 0.4215 | 29.2395 | | 0.1971 | 1.6 | 1000 | 0.4200 | 29.2466 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
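The card does not show inference code; here is a minimal sketch, assuming the standard `transformers` ASR pipeline and a local 16 kHz audio file (the file name is a placeholder):

```python
from transformers import pipeline

# Hypothetical usage sketch for the fine-tuned Whisper checkpoint above.
# "sample_javanese.wav" is a placeholder for your own audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="OwLim/whisper-java-SLR41-SLR35",
)

print(asr("sample_javanese.wav")["text"])
```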
ZTE-AIM/LLM-Adaptive-ZCode-model-32B
ZTE-AIM
2025-05-18T08:11:40Z
0
3
null
[ "safetensors", "arxiv:2504.11919", "license:apache-2.0", "region:us" ]
null
2025-04-23T06:34:55Z
--- license: apache-2.0 datasets: - ZTE-AIM/32B_LLM_AdaptiveMath_data - ZTE-AIM/32B_LLM_AdaptiveCode_data base_model: - DeepSeek-R1-Distill-Qwen-32B --- <p align="left"> <a href="https://huggingface.co/ZTE-AIM/LLM-Adaptive-ZCode-model-32B/blob/main/README.md">English</a> | <a href="https://huggingface.co/ZTE-AIM/LLM-Adaptive-ZCode-model-32B/blob/main/README_ZH_COT.md">中文</a> </p> ## 32B_LLM_AdaptiveMath_data [\[🤗 HF Dataset\]](https://huggingface.co/datasets/ZTE-AIM/32B_LLM_AdaptiveMath_data) ## LLM-Adaptive-CoT-Code-data [\[🤗 HF Dataset\]](https://huggingface.co/datasets/ZTE-AIM/32B_LLM_AdaptiveCode_data) ## LLM-Adaptive-ZMath-model-32B [\[🤗 LLM-Adaptive-ZMath-model-32B\]](https://huggingface.co/ZTE-AIM/LLM-Adaptive-ZMath-model-32B) ## LLM-Adaptive-ZCode-model-32B [\[🤗 LLM-Adaptive-ZCode-model-32B\]](https://huggingface.co/ZTE-AIM/LLM-Adaptive-ZCode-model-32B) ## Model Overview This work presents a fine-tuned reasoning model built on the DeepSeek-Distill architecture through a novel LLM-Adaptive Question Difficulty Grading method. Unlike traditional CoT generation approaches, this model leverages the reasoning strength of DeepSeek-R1 (671B) to distill high-quality chain-of-thought (CoT) data. A core innovation lies in the dynamic construction of difficulty-aligned datasets based on the target LLM's own problem-solving capabilities. The proposed approach includes adaptive evaluation of question difficulty, followed by tailored sampling and response generation. This enables the model to efficiently learn from progressively challenging problems, thereby boosting reasoning performance across multiple domains such as mathematical problem solving and code generation. Fine-tuned variants like ZMath-32B and ZCode-32B exhibit superior performance to baseline models like DeepSeek-Distill-32B and phi-4, even with limited high-quality data. Notably, the ZMath-32B model trained on only 2K PRM-graded CoT samples surpassed its baseline across all math benchmarks, confirming the effectiveness of the adaptive CoT generation methodology. ## Training Configuration Our training framework builds on previous advancements in s1-1k, LIMO, and Light-R1, implemented through LLaMA-Factory to leverage its proven scalability. The framework incorporates the DeepSeek-R1 template, FlashAttention-2, and Liger-Kernel to improve computational efficiency while minimizing memory requirements. All experiments are conducted on a 2×8 H800 GPU cluster, with performance evaluations executed using the Skythought benchmarking suite. The training configuration for GRPO is as follows: ```python Context Length: 16,384 tokens Learning Rate: 5e-6 Batch Size: 128 Epochs: 10 ``` ## Usage You can load the model using the Hugging Face `transformers` library: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Replace with the actual path to your model on Hugging Face. model_name = "your-org/ZMath-32B" # Load the tokenizer. tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) # Load the model (with multi-GPU support and automatic allocation to available devices). model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, # Use float16 precision to save GPU memory device_map="auto", # Automatically distribute the model across multiple GPUs. trust_remote_code=True ) # Example inference prompt = "Solve the following math problem step by step: 12 * (3 + 4) = ?" 
inputs = tokenizer(prompt, return_tensors="pt").to(model.device) with torch.no_grad(): outputs = model.generate(**inputs, max_new_tokens=100) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ## Paper Link - [📄 Read the Paper (PDF)](https://arxiv.org/pdf/2504.11919) ## Institution - ZTE-AIM ## Model Contact - [email protected] - [email protected]
infogep/b4f47bc7-35c1-4b61-8174-0be9bb7f04b9
infogep
2025-05-18T07:37:00Z
0
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/codegemma-7b", "base_model:adapter:unsloth/codegemma-7b", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-18T07:00:31Z
--- library_name: peft license: apache-2.0 base_model: unsloth/codegemma-7b tags: - axolotl - generated_from_trainer model-index: - name: b4f47bc7-35c1-4b61-8174-0be9bb7f04b9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/codegemma-7b bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - caab2a4a81493a75_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: instruction field_output: response format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: infogep/b4f47bc7-35c1-4b61-8174-0be9bb7f04b9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 2.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.1 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 500 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/caab2a4a81493a75_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 80a0ac45-7587-4a3e-8bb9-d994b9669f88 wandb_project: s56-7 wandb_run: your_name wandb_runid: 80a0ac45-7587-4a3e-8bb9-d994b9669f88 warmup_steps: 50 weight_decay: 0.01 xformers_attention: false ``` </details><br> # b4f47bc7-35c1-4b61-8174-0be9bb7f04b9 This model is a fine-tuned version of [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.2047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.1842 | 0.9355 | 500 | 0.2047 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
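The card shows the training configuration but no loading code; below is a minimal sketch, assuming the adapter follows the usual PEFT layout on top of the base model named above:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch: load the base model, then apply this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/codegemma-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "infogep/b4f47bc7-35c1-4b61-8174-0be9bb7f04b9")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codegemma-7b")

inputs = tokenizer("Write a short function that reverses a string.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```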
ibokajordan/MBART_finetuned
ibokajordan
2025-05-18T07:18:14Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:facebook/mbart-large-50-many-to-many-mmt", "base_model:finetune:facebook/mbart-large-50-many-to-many-mmt", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-18T07:17:00Z
--- library_name: transformers base_model: facebook/mbart-large-50-many-to-many-mmt tags: - generated_from_trainer model-index: - name: MBART_finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MBART_finetuned This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
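The card leaves usage undocumented; below is a minimal sketch, assuming the checkpoint keeps the mBART-50 many-to-many interface — the French-to-English language codes are placeholders, since the card does not state the training language pair:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Hypothetical usage sketch: the language codes below are placeholders,
# as the card does not document the fine-tuning language pair.
model = MBartForConditionalGeneration.from_pretrained("ibokajordan/MBART_finetuned")
tokenizer = MBart50TokenizerFast.from_pretrained("ibokajordan/MBART_finetuned")

tokenizer.src_lang = "fr_XX"
inputs = tokenizer("Le temps est magnifique aujourd'hui.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```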
ccclllwww/smoker_cls_base_V7
ccclllwww
2025-05-18T07:17:23Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2025-05-18T07:11:38Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smoker_cls_base_V7 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9108910891089109 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smoker_cls_base_V7 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2327 - Accuracy: 0.9109 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.6845 | 1.0 | 15 | 0.5545 | 0.8515 | | 0.3824 | 2.0 | 30 | 0.3374 | 0.8911 | | 0.2909 | 3.0 | 45 | 0.2966 | 0.8713 | | 0.2165 | 4.0 | 60 | 0.2239 | 0.9208 | | 0.2055 | 4.7018 | 70 | 0.2405 | 0.9208 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
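No inference example is given; below is a minimal sketch, assuming the standard `transformers` image-classification pipeline (the image path is a placeholder):

```python
from PIL import Image
from transformers import pipeline

# Hypothetical usage sketch: "test_photo.jpg" is a placeholder image path;
# class names come from the image-folder labels used for training.
classifier = pipeline("image-classification", model="ccclllwww/smoker_cls_base_V7")

image = Image.open("test_photo.jpg")
print(classifier(image))
```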
sleepdeprived3/Mistral-V7-Tekken-Concise
sleepdeprived3
2025-05-18T06:53:46Z
0
0
null
[ "license:other", "region:us" ]
null
2025-04-24T07:54:06Z
--- license: other license_name: other license_link: LICENSE --- Full SillyTavern settings for Mistral V7 Tekken, with a prompt for concise roleplay. To clarify: the prompt makes your roleplay replies concise, not merely the prompt itself (though it is concise as well). Suitable for most 24B Mistral finetunes. Default temperature is set to 1.0 and works well from 0.6 to 1.2. People were experiencing issues with the link previously posted in this model card; download the .json file from the Files and versions section instead.
tarabukinivanhome/394efa82-16a8-4008-b0bd-dca9a4eb0fbf
tarabukinivanhome
2025-05-18T06:12:26Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-18T05:45:34Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: 394efa82-16a8-4008-b0bd-dca9a4eb0fbf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: dltjdgh0928/test_instruction bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 42b83b13548a3d13_train_data.json ds_type: json format: custom path: /workspace/input_data/42b83b13548a3d13_train_data.json type: field_instruction: text field_output: completion_a format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: tarabukinivanhome/394efa82-16a8-4008-b0bd-dca9a4eb0fbf hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/42b83b13548a3d13_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 453b3f04-5b2f-4f08-a2b0-72a087827b33 wandb_project: s56-7 wandb_run: your_name wandb_runid: 453b3f04-5b2f-4f08-a2b0-72a087827b33 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 394efa82-16a8-4008-b0bd-dca9a4eb0fbf This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8598 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8617 | 0.1335 | 150 | 0.8598 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
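As with the other axolotl adapter above, loading code is not shown; below is a minimal sketch, assuming the usual PEFT adapter layout — `AutoPeftModelForCausalLM` resolves the base model from the adapter config, and `merge_and_unload()` folds the LoRA weights into it:

```python
from peft import AutoPeftModelForCausalLM

# Hypothetical usage sketch: merge this LoRA adapter into its base model
# and save a standalone checkpoint.
model = AutoPeftModelForCausalLM.from_pretrained(
    "tarabukinivanhome/394efa82-16a8-4008-b0bd-dca9a4eb0fbf"
)
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```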
ajtasolnyshka/a783186d-06ec-4888-b77b-fcfa53949dc3
ajtasolnyshka
2025-05-18T06:12:26Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-18T05:45:29Z
--- library_name: peft license: apache-2.0 base_model: dltjdgh0928/test_instruction tags: - axolotl - generated_from_trainer model-index: - name: a783186d-06ec-4888-b77b-fcfa53949dc3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.5.2` ```yaml absolute_data_files: false adapter: lora base_model: dltjdgh0928/test_instruction bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 42b83b13548a3d13_train_data.json ds_type: json format: custom path: /workspace/input_data/42b83b13548a3d13_train_data.json type: field_instruction: text field_output: completion_a format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: ajtasolnyshka/a783186d-06ec-4888-b77b-fcfa53949dc3 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/42b83b13548a3d13_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 453b3f04-5b2f-4f08-a2b0-72a087827b33 wandb_project: s56-7 wandb_run: your_name wandb_runid: 453b3f04-5b2f-4f08-a2b0-72a087827b33 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # a783186d-06ec-4888-b77b-fcfa53949dc3 This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.8586 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8592 | 0.1335 | 150 | 0.8586 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.3 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
gecfdo/The-Omega-Directive-M-36B-v1.0_EXL3_8.0bpw_H8
gecfdo
2025-05-18T05:56:45Z
0
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "dangerous", "ERP", "text-generation", "conversational", "en", "base_model:ReadyArt/The-Omega-Directive-M-36B-v1.0", "base_model:quantized:ReadyArt/The-Omega-Directive-M-36B-v1.0", "license:apache-2.0", "8-bit", "exl3", "region:us" ]
text-generation
2025-05-17T09:19:41Z
--- license: apache-2.0 language: - en base_model: - ReadyArt/The-Omega-Directive-M-36B-v1.0 base_model_relation: quantized quantized_by: gecfdo pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - dangerous - ERP --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 255, 0.1); } .section::before { content: ''; 
position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .quant-links { display: grid; grid-template-columns: repeat(2, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: '→'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: '⚠️'; position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { background: rgba(224, 255, 255, 0.95); 
border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">The-Omega-Directive-M-36B-v1.0</h1> <p class="subtitle">Where Forbidden Knowledge Meets Unparalleled Immersion</p> </div> <div class="waifu-container"> <img src="https://i.imghippo.com/files/EBq6162wlk.webp" class="waifu-img" alt="Omega Directive Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">⚡ Quantum Leap Forward</h2> <p>This evolution of Forgotten-Safeword delivers coherent depravity with unprecedented immersion:</p> <ul> <li>🧬 <strong>Expanded 22M Token Dataset</strong> - Incorporating 90 erotic novels and 6,496 kink scenarios</li> <li>⚡ <strong>Optimized Architecture</strong> - Smoother training curve yields more intelligent outputs</li> <li>💎 <strong>Balanced Depravity</strong> - Retains Forgotten-Safeword's edge while reducing jarring inconsistencies</li> <li>📜 <strong>Enhanced Character Piloting</strong> - Characters exhibit more nuanced personalities and motivations</li> <li>🌹 <strong>Unexpected Depth</strong> - Occasionally surprises with profound insights amidst the debauchery</li> </ul> </div> <div 
class="section shifty-section"> <h2 class="section-title">⚙️ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/The-Omega-Directive-M-36B-v1.0-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/The-Omega-Directive-M-36B-v1.0-i1-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>EXL2</h3> <a href="https://huggingface.co/collections/ReadyArt/the-omega-directive-m-36b-v10-exl2-67f5c58c8f60757d94e7d012" class="link-button">Quants</a> </div> <div class="link-card"> <h3>EXL3</h3> <a href="https://huggingface.co/collections/ReadyArt/the-omega-directive-m-36b-v10-exl3-68294ef2c8a4b019401954e2" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">⚠️ Ethical Considerations</h2> <div class="disclaimer"> <p>This model maintains Forgotten-Safeword's complete lack of alignment:</p> <ul> <li>🚫 No moral or ethical constraints on generated content</li> <li>🔞 Will enthusiastically engage with any NSFW scenario</li> <li>💀 May generate content that requires industrial-grade brain bleach</li> <li>⚖️ Perfectly balanced... as all things should be</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">📜 Performance Notes</h2> <ul> <li>🔥 Maintains signature intensity with improved narrative flow</li> <li>📖 Handles multi-character scenarios with improved consistency</li> <li>🧠 Excels at long-form storytelling without losing track of plot threads</li> <li>⚡ Noticeably better at following complex instructions than previous versions</li> <li>🎭 Responds to subtle prompt nuances like a mind reader</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">🧑‍🔬 Model Authors</h2> <ul> <li>TheDrummer (Base Model Architect)</li> <li>SteelSkull (Dataset Generation Contributor)</li> <li>Artus (EXL2 Weights Weaver)</li> <li>sleepdeprived3 (Training Data & Fine-Tuning)</li> </ul> </div> <div class="section"> <h2 class="section-title">☕ Support the Architects</h2> <div class="button-group"> <a href="https://ko-fi.com/thedrummer" class="link-button">TheDrummer's Kofi</a> <a href="https://ko-fi.com/steelskull" class="link-button">SteelSkull's Kofi</a> <a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a> </div> </div> <div class="section"> <h2 class="section-title">🔖 License</h2> <p>By using this model, you agree:</p> <ul> <li>To accept full responsibility for all generated content</li> <li>That you're at least 18+ years old</li> <li>That the architects bear no responsibility for your corruption</li> </ul> </div> </div> <script> // This script has always been here document.getElementById('date').textContent = new Date().toLocaleDateString(); setInterval(() => { document.getElementById('credit').textContent = contributors[Math.floor(Math.random() * contributors.length)]; }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time. 
Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
Sapna-Shah-Viral-Video-Original-Link/Original.Viral.Clip.Sapna.Shah.Viral.Video.Leaks.Official
Sapna-Shah-Viral-Video-Original-Link
2025-05-18T05:52:52Z
0
0
null
[ "region:us" ]
null
2025-05-18T05:51:46Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/3rv9ct3b?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> Actress Sah Sapna Kumari 2025 Viral Video Original Link On Social Media X Now ,Sapna Shah Viral Video Terabox Download Link , Sapna Kumari 16 Min Full Viral Video Original Link , Sapna Shah Viral Video Full 16 Min 6 Second Viral Video Full HD Quality
mare-morto-th-da-mare/mare.morto.th.da.mare
mare-morto-th-da-mare
2025-05-18T05:39:42Z
0
0
null
[ "region:us" ]
null
2025-05-18T05:38:07Z
Watch 🟢 ➤ ➤ ➤ <a href="https://trendbosses.com/gdfgdgdgd"> 🌐 Click Here To link (Full Viral Video Link) 🔴 ➤►DOWNLOAD👉👉🟢 ➤
MinaMila/phi3b_unlearned_unbalanced_gender_1e-5_1.0_0.15_0.25_epoch1
MinaMila
2025-05-18T05:09:27Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-18T05:07:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CharlesPing/finetuned-ce-climate-multineg-v1
CharlesPing
2025-05-18T05:00:00Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "cross-encoder", "generated_from_trainer", "dataset_size:41052", "loss:BinaryCrossEntropyLoss", "text-ranking", "dataset:CharlesPing/climate-cross-encoder-mixed-neg-v3", "arxiv:1908.10084", "base_model:cross-encoder/ms-marco-MiniLM-L12-v2", "base_model:finetune:cross-encoder/ms-marco-MiniLM-L12-v2", "model-index", "region:us" ]
text-ranking
2025-05-17T05:35:43Z
--- tags: - sentence-transformers - cross-encoder - generated_from_trainer - dataset_size:41052 - loss:BinaryCrossEntropyLoss base_model: cross-encoder/ms-marco-MiniLM-L12-v2 datasets: - CharlesPing/climate-cross-encoder-mixed-neg-v3 pipeline_tag: text-ranking library_name: sentence-transformers metrics: - map - mrr@1 - ndcg@1 model-index: - name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2 results: - task: type: cross-encoder-reranking name: Cross Encoder Reranking dataset: name: climate rerank multineg type: climate-rerank-multineg metrics: - type: map value: 0.6809 name: Map - type: mrr@1 value: 0.6748 name: Mrr@1 - type: ndcg@1 value: 0.6748 name: Ndcg@1 --- # CrossEncoder based on cross-encoder/ms-marco-MiniLM-L12-v2 This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) on the [climate-cross-encoder-mixed-neg-v3](https://huggingface.co/datasets/CharlesPing/climate-cross-encoder-mixed-neg-v3) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. ## Model Details ### Model Description - **Model Type:** Cross Encoder - **Base model:** [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) <!-- at revision a34da8fab3ad458d48778dea3276ce729857efaf --> - **Maximum Sequence Length:** 512 tokens - **Number of Output Labels:** 1 label - **Training Dataset:** - [climate-cross-encoder-mixed-neg-v3](https://huggingface.co/datasets/CharlesPing/climate-cross-encoder-mixed-neg-v3) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder) ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import CrossEncoder # Download from the 🤗 Hub model = CrossEncoder("CharlesPing/finetuned-ce-climate-multineg-v1") # Get scores for pairs of texts pairs = [ ['Scientific analysis of past climates\xa0shows that greenhouse gasses, principally CO2,\xa0have controlled most ancient\xa0climate changes.', 'Greenhouse gases, in particular carbon dioxide and methane, played a significant role during the Eocene in controlling the surface temperature.'], ['Scientific analysis of past climates\xa0shows that greenhouse gasses, principally CO2,\xa0have controlled most ancient\xa0climate changes.', 'Climatic geomorphology is of limited use to study recent (Quaternary, Holocene) large climate changes since there are seldom discernible in the geomorphological record.'], ['Scientific analysis of past climates\xa0shows that greenhouse gasses, principally CO2,\xa0have controlled most ancient\xa0climate changes.', 'There is also a close correlation between CO2 and temperature, where CO2 has a strong control over global temperatures in Earth history.'], ['Scientific analysis of past climates\xa0shows that greenhouse gasses, principally CO2,\xa0have controlled most ancient\xa0climate changes.', 'While scientists knew of past climate change such as the ice ages, the concept of climate as unchanging was useful in the development of a general theory of what determines climate.'], ['Scientific analysis of past climates\xa0shows that greenhouse gasses, principally CO2,\xa0have controlled most ancient\xa0climate changes.', 'Some long term modifications along the history of the planet have been significant, such as the incorporation of oxygen to the atmosphere.'], ] scores = model.predict(pairs) print(scores.shape) # (5,) # Or rank different texts based on similarity to a single text ranks = model.rank( 'Scientific analysis of past climates\xa0shows that greenhouse gasses, principally CO2,\xa0have controlled most ancient\xa0climate changes.', [ 'Greenhouse gases, in particular carbon dioxide and methane, played a significant role during the Eocene in controlling the surface temperature.', 'Climatic geomorphology is of limited use to study recent (Quaternary, Holocene) large climate changes since there are seldom discernible in the geomorphological record.', 'There is also a close correlation between CO2 and temperature, where CO2 has a strong control over global temperatures in Earth history.', 'While scientists knew of past climate change such as the ice ages, the concept of climate as unchanging was useful in the development of a general theory of what determines climate.', 'Some long term modifications along the history of the planet have been significant, such as the incorporation of oxygen to the atmosphere.', ] ) # [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Cross Encoder Reranking * Dataset: `climate-rerank-multineg` * Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters: ```json { "at_k": 1, "always_rerank_positives": false } ``` | Metric | Value | |:-----------|:---------------------| | map | 0.6809 (-0.3191) | | mrr@1 | 0.6748 (-0.3252) | | **ndcg@1** | **0.6748 (-0.3252)** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### climate-cross-encoder-mixed-neg-v3 * Dataset: [climate-cross-encoder-mixed-neg-v3](https://huggingface.co/datasets/CharlesPing/climate-cross-encoder-mixed-neg-v3) at [cd49b57](https://huggingface.co/datasets/CharlesPing/climate-cross-encoder-mixed-neg-v3/tree/cd49b575d132b8a377614b398f1a30378ce0c563) * Size: 41,052 training samples * Columns: <code>query</code>, <code>doc</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | doc | label | |:--------|:-------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 49 characters</li><li>mean: 140.03 characters</li><li>max: 306 characters</li></ul> | <ul><li>min: 4 characters</li><li>mean: 136.03 characters</li><li>max: 731 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.09</li><li>max: 1.0</li></ul> | * Samples: | query | doc | label | |:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| | <code>“A leading Canadian authority on polar bears, Mitch Taylor, said: ‘We’re seeing an increase in bears that’s really unprecedented, and in places where we’re seeing a decrease in the population</code> | <code>Warnings about the future of the polar bear are often contrasted with the fact that worldwide population estimates have increased over the past 50 years and are relatively stable today.</code> | <code>1.0</code> | | <code>“A leading Canadian authority on polar bears, Mitch Taylor, said: ‘We’re seeing an increase in bears that’s really unprecedented, and in places where we’re seeing a decrease in the population</code> | <code>Species distribution models of recent years indicate that the deer tick, known as "I. 
scapularis," is pushing its distribution to higher latitudes of the Northeastern United States and Canada, as well as pushing and maintaining populations in the South Central and Northern Midwest regions of the United States.</code> | <code>0.0</code> | | <code>“A leading Canadian authority on polar bears, Mitch Taylor, said: ‘We’re seeing an increase in bears that’s really unprecedented, and in places where we’re seeing a decrease in the population</code> | <code>Bear and deer are among the animals present.</code> | <code>0.0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### Evaluation Dataset #### climate-cross-encoder-mixed-neg-v3 * Dataset: [climate-cross-encoder-mixed-neg-v3](https://huggingface.co/datasets/CharlesPing/climate-cross-encoder-mixed-neg-v3) at [cd49b57](https://huggingface.co/datasets/CharlesPing/climate-cross-encoder-mixed-neg-v3/tree/cd49b575d132b8a377614b398f1a30378ce0c563) * Size: 4,290 evaluation samples * Columns: <code>query</code>, <code>doc</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | query | doc | label | |:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 39 characters</li><li>mean: 116.67 characters</li><li>max: 240 characters</li></ul> | <ul><li>min: 18 characters</li><li>mean: 132.92 characters</li><li>max: 731 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.09</li><li>max: 1.0</li></ul> | * Samples: | query | doc | label | |:-----------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------| | <code>Scientific analysis of past climates shows that greenhouse gasses, principally CO2, have controlled most ancient climate changes.</code> | <code>Greenhouse gases, in particular carbon dioxide and methane, played a significant role during the Eocene in controlling the surface temperature.</code> | <code>1.0</code> | | <code>Scientific analysis of past climates shows that greenhouse gasses, principally CO2, have controlled most ancient climate changes.</code> | <code>Climatic geomorphology is of limited use to study recent (Quaternary, Holocene) large climate changes since there are seldom discernible in the geomorphological record.</code> | <code>0.0</code> | | <code>Scientific analysis of past climates shows that greenhouse gasses, principally CO2, have controlled most ancient climate changes.</code> | <code>There is also a close correlation between CO2 and temperature, where CO2 has a strong control over global temperatures in Earth history.</code> | <code>0.0</code> | * Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters: ```json { "activation_fn": "torch.nn.modules.linear.Identity", "pos_weight": null } ``` ### Training Hyperparameters #### 
Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `learning_rate`: 2e-05 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - 
`average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | climate-rerank-multineg_ndcg@1 | |:----------:|:--------:|:-------------:|:---------------:|:------------------------------:| | 0.0390 | 100 | 0.5097 | - | - | | 0.0779 | 200 | 0.3662 | - | - | | 0.1169 | 300 | 0.3034 | - | - | | 0.1559 | 400 | 0.2655 | - | - | | 0.1949 | 500 | 0.2651 | 0.2262 | 0.6585 (-0.3415) | | 0.2338 | 600 | 0.2161 | - | - | | 0.2728 | 700 | 0.227 | - | - | | 0.3118 | 800 | 0.235 | - | - | | 0.3507 | 900 | 0.2243 | - | - | | 0.3897 | 1000 | 0.2081 | 0.2174 | 0.6992 (-0.3008) | | 0.4287 | 1100 | 0.1961 | - | - | | 0.4677 | 1200 | 0.207 | - | - | | 0.5066 | 1300 | 0.2375 | - | - | | 0.5456 | 1400 | 0.2117 | - | - | | 0.5846 | 1500 | 0.2058 | 0.2253 | 0.6748 (-0.3252) | | 0.6235 | 1600 | 0.2163 | - | - | | 0.6625 | 1700 | 0.2235 | - | - | | 0.7015 | 1800 | 0.2193 | - | - | | 0.7405 | 1900 | 0.1924 | - | - | | 0.7794 | 2000 | 0.2084 | 0.2095 | 0.6748 (-0.3252) | | 0.8184 | 2100 | 0.2113 | - | - | | 0.8574 | 2200 | 0.2276 | - | - | | 0.8963 | 2300 | 0.2071 | - | - | | 0.9353 | 2400 | 0.2374 | - | - | | 0.9743 | 2500 | 0.2173 | 0.2172 | 0.6667 (-0.3333) | | 1.0133 | 2600 | 0.2011 | - | - | | 1.0522 | 2700 | 0.1634 | - | - | | 1.0912 | 2800 | 0.1807 | - | - | | 1.1302 | 2900 | 0.1878 | - | - | | 1.1691 | 3000 | 0.2037 | 0.2147 | 0.6911 (-0.3089) | | 1.2081 | 3100 | 0.1904 | - | - | | 1.2471 | 3200 | 0.1911 | - | - | | 1.2860 | 3300 | 0.1828 | - | - | | 1.3250 | 3400 | 0.1686 | - | - | | 1.3640 | 3500 | 0.1892 | 0.2179 | 0.6992 (-0.3008) | | 1.4030 | 3600 | 0.188 | - | - | | 1.4419 | 3700 | 0.1691 | - | - | | 1.4809 | 3800 | 0.1946 | - | - | | 1.5199 | 3900 | 0.1938 | - | - | | **1.5588** | **4000** | **0.211** | **0.2088** | **0.6992 (-0.3008)** | | 1.5978 | 4100 | 0.1826 | - | - | | 1.6368 | 4200 | 0.1608 | - | - | | 1.6758 | 4300 | 0.1782 | - | - | | 1.7147 | 4400 | 0.1803 | - | - | | 1.7537 | 4500 | 0.1804 | 0.2160 | 0.6911 (-0.3089) | | 1.7927 | 4600 | 0.1823 | - | - | | 1.8316 | 4700 | 0.1844 | - | - | | 1.8706 | 4800 | 0.1727 | - | - | | 1.9096 | 4900 | 0.1937 | - | - | | 1.9486 | 5000 | 0.1662 | 0.2219 | 0.6829 (-0.3171) | | 1.9875 | 5100 | 0.1653 | - | - | | 2.0265 | 5200 | 0.1658 | - | - | | 2.0655 | 5300 | 0.1316 | - | - | | 2.1044 | 5400 | 0.1379 | - | - | | 2.1434 | 5500 | 0.152 | 0.2513 | 0.6504 (-0.3496) | | 2.1824 | 5600 | 0.1848 | - | - | | 2.2214 | 5700 | 0.1507 | - | - | | 2.2603 | 5800 | 0.1495 | - | - | | 2.2993 | 5900 | 0.1469 | - | - | | 2.3383 | 6000 | 0.1596 | 0.2407 | 0.6585 (-0.3415) | | 2.3772 | 6100 | 0.1518 | - | - | | 2.4162 | 6200 | 0.1351 | - | - | | 2.4552 | 6300 | 0.1706 | - | - | | 2.4942 | 6400 | 0.1538 | - | - | | 2.5331 | 6500 | 0.1329 | 0.2505 | 0.6911 (-0.3089) | | 2.5721 | 6600 | 0.147 | - | - | | 2.6111 | 6700 | 0.1289 | - | - | | 2.6500 | 6800 | 0.1698 | - | - | | 2.6890 | 6900 | 0.1456 | - | - | | 2.7280 | 7000 | 0.141 | 0.2618 | 0.6748 (-0.3252) | | 2.7670 | 7100 | 0.1413 | - | - | | 2.8059 | 7200 | 0.1474 | - | - | | 2.8449 | 7300 | 0.1381 | - | - | | 2.8839 | 7400 | 0.1252 | - | - | | 2.9228 | 7500 | 0.1384 | 0.2608 | 0.6748 (-0.3252) | | 2.9618 | 7600 | 0.1826 | - | - | * The bold row denotes the saved checkpoint. 
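### Example: Reranking with the Trained Cross Encoder

As a quick illustration of how a cross encoder trained with this recipe is used at inference time, the sketch below scores (query, document) pairs and sorts candidates by relevance. The model ID is a placeholder, not this repository's actual name; substitute the checkpoint you want to load.

```python
from sentence_transformers import CrossEncoder

# Placeholder ID (assumption) — replace with this repository's checkpoint
model = CrossEncoder("your-username/climate-cross-encoder")

query = "Greenhouse gasses, principally CO2, have controlled most ancient climate changes."
docs = [
    "Greenhouse gases, in particular carbon dioxide and methane, played a significant role during the Eocene.",
    "Climatic geomorphology is of limited use to study recent large climate changes.",
]

# One relevance score per (query, doc) pair; higher means more relevant
scores = model.predict([(query, doc) for doc in docs])

# Rerank candidates by descending score
for doc, score in sorted(zip(docs, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{score:.3f}\t{doc}")
```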
### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
Marco0/spceeee9
Marco0
2025-05-18T04:57:02Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-05-18T04:55:18Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Ferropulsepvt/model2
Ferropulsepvt
2025-05-18T04:53:47Z
0
0
mlx
[ "mlx", "safetensors", "mistral", "pretrained", "text-generation", "en", "base_model:mistralai/Mistral-7B-v0.1", "base_model:quantized:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "4-bit", "region:us" ]
text-generation
2025-05-18T04:46:14Z
--- language: - en license: apache-2.0 tags: - pretrained - mlx pipeline_tag: text-generation inference: parameters: temperature: 0.7 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. base_model: mistralai/Mistral-7B-v0.1 library_name: mlx --- # Ferropulsepvt/model2 This model [Ferropulsepvt/model2](https://huggingface.co/Ferropulsepvt/model2) was converted to MLX format from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using mlx-lm version **0.24.1**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("Ferropulsepvt/model2") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
JonasOrso/gemma-text-to-sql-spider-portugues
JonasOrso
2025-05-18T04:17:49Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-05-18T04:14:04Z
--- base_model: google/gemma-3-1b-pt library_name: transformers model_name: gemma-text-to-sql-spider-portugues tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-text-to-sql-spider-portugues This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JonasOrso/gemma-text-to-sql-spider-portugues", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
TKS0127/Landscape
TKS0127
2025-05-18T04:14:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-17T09:58:17Z
--- license: apache-2.0 ---
marialvsantiago/bc66c1a1-6a50-4dc1-9db0-171cacc8df16
marialvsantiago
2025-05-18T04:05:40Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:EleutherAI/pythia-1b", "base_model:quantized:EleutherAI/pythia-1b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-18T03:58:32Z
--- base_model: EleutherAI/pythia-1b library_name: transformers model_name: bc66c1a1-6a50-4dc1-9db0-171cacc8df16 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for bc66c1a1-6a50-4dc1-9db0-171cacc8df16 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="marialvsantiago/bc66c1a1-6a50-4dc1-9db0-171cacc8df16", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-33/runs/7jls6f06) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
kkakkkka/HunyuanPortrait
kkakkkka
2025-05-18T03:52:00Z
0
0
null
[ "arxiv:2503.18860", "region:us" ]
null
2025-03-21T16:53:26Z
Paper: https://arxiv.org/abs/2503.18860 Code: https://github.com/kkakkkka/HunyuanPortrait Page: https://kkakkkka.github.io/HunyuanPortrait
chinedunnaji/qwen3_ade_merged1
chinedunnaji
2025-05-18T02:45:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-18T02:43:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/civitetasillust-v10-sdxl
John6666
2025-05-18T02:29:30Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "girls", "merge", "Illustrious XL v2.0", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:Raelina/Raehoshi-illust-XL-4", "base_model:merge:Raelina/Raehoshi-illust-XL-4", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-05-18T02:22:36Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - hentai - girls - merge - Illustrious XL v2.0 - illustrious base_model: - OnomaAIResearch/Illustrious-XL-v2.0 - Raelina/Raehoshi-illust-XL-4 --- Original model is [here](https://civitai.com/models/1591963/civitetasillust?modelVersionId=1801512). This model created by [Civitetas](https://civitai.com/user/Civitetas).
khalidh7/dqn-SpaceInvadersNoFrameskip-v4
khalidh7
2025-05-18T02:12:13Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-18T02:11:41Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 657.50 +/- 249.16 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga khalidh7 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga khalidh7 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga khalidh7 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
Rustamshry/Qwen2.5-1.5B-Medical-Az
Rustamshry
2025-05-18T01:52:13Z
31
1
peft
[ "peft", "safetensors", "biology", "medical", "question-answering", "az", "dataset:Rustamshry/medical_o1_reasoning_SFT_az", "base_model:unsloth/Qwen2.5-1.5B-Instruct", "base_model:adapter:unsloth/Qwen2.5-1.5B-Instruct", "license:mit", "region:us" ]
question-answering
2025-05-11T12:22:50Z
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
library_name: peft
license: mit
datasets:
- Rustamshry/medical_o1_reasoning_SFT_az
language:
- az
pipeline_tag: question-answering
tags:
- biology
- medical
---

# Model Card for Qwen2.5-1.5B-Medical-Az

### Model Description

This model is a fine-tuned version of Qwen2.5-1.5B-Instruct on an Azerbaijani medical reasoning dataset. It is designed to understand complex medical instructions, interpret clinical cases, and generate informed answers in Azerbaijani.

- **Developed by:** Rustam Shiriyev
- **Model type:** Causal Language Model
- **Language(s) (NLP):** Azerbaijani
- **License:** MIT
- **Finetuned from model:** unsloth/Qwen2.5-1.5B-Instruct
- **Fine-tuning Method:** Supervised Fine-Tuning (SFT) using Unsloth + LoRA
- **Domain:** Medical Question Answering / Reasoning
- **Dataset:** The training data consists of ~19,696 rows, translated from the FreedomIntelligence/medical-o1-reasoning-SFT dataset

## Uses

### Direct Use

You can use this model directly for:

- Medical QA tasks in Azerbaijani
- Evaluating LLMs' ability to reason about clinical data in low-resource languages
- Generating educational prompts or tutoring-style medical answers
- Research on instruction tuning and localization of medical language models

### Out-of-Scope Use

- Use in life-critical medical applications
- Any application where incorrect answers could cause harm
- Use by patients or non-medical professionals for self-diagnosis
- Deployment in commercial healthcare systems without regulatory oversight or expert validation

## Bias, Risks, and Limitations

The model has not been clinically validated and must not be used for real medical decision-making. It was trained only on a single-source dataset, so it may not generalize to all medical topics. Performance on zero-shot generalization (e.g., English → Azerbaijani medical transfer) has not been tested.

## How to Get Started with the Model

```python
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

login(token="")

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-1.5B-Instruct")

# Load the base model, then attach the fine-tuned LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-1.5B-Instruct",
    device_map="auto",
    token=""
)
model = PeftModel.from_pretrained(base_model, "Rustamshry/Qwen2.5-1.5B-Medical-Az")

question = "45 yaşlı kişi qəfil danışıqda pozulma, yeriyişində dəyişiklik və titrəmə meydana gəlir. Ən ehtimal diaqnoz nədir?"

prompt = f"""### Question:\n{question}\n\n### Response:\n"""

input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **input_ids,
    max_new_tokens=2000,
    #temperature=0.6,
    #top_p=0.95,
    #do_sample=True,
    #eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(outputs[0]))
```

## Training Details

### Training Data

The model was fine-tuned on a translated and cleaned version of FreedomIntelligence/medical-o1-reasoning-SFT, which was manually converted into Azerbaijani. All examples were filtered for translation quality and medical relevance.

- Dataset (Translated): Rustamshry/medical_o1_reasoning_SFT_az
- Link of Original Dataset: huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT

### Training Procedure

The model was trained using supervised fine-tuning (SFT) with parameter-efficient fine-tuning (PEFT) via LoRA, using the Unsloth library for memory-optimized training.
- **Training regime:** fp16
- **Epochs:** 2
- **Batch size:** 2
- **Gradient accumulation steps:** 4
- **Max sequence length:** 2000
- **Learning rate:** 2e-5
- **Optimizer:** adamw_torch
- **fp16:** True
- **LoRA rank:** 6
- **Alpha:** 16
- **Target Modules:** 28 layers with 28 QKV, 28 O, and 28 MLP.

#### Speeds, Sizes, Times

- **Training speed:** 0.12 steps/sec
- **Total training time:** 11 hours, 26 minutes
- **Total training steps:** 4924

#### Hardware

- **GPUs Used:** NVIDIA Tesla T4 GPUs via Kaggle Notebook

#### Result

- **Training loss:** 2.68 → 1.63

### Framework versions

- PEFT 0.14.0
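For illustration only, the LoRA settings above map roughly onto the following PEFT configuration. This is a minimal sketch, not the script used to train this model, and the target-module names are an assumption based on the card's "28 QKV, 28 O, and 28 MLP" description of Qwen2.5's projection layers.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter configuration from the reported hyperparameters
lora_config = LoraConfig(
    r=6,            # LoRA rank reported above
    lora_alpha=16,  # alpha reported above
    target_modules=[
        "q_proj", "k_proj", "v_proj",          # QKV projections (assumed names)
        "o_proj",                              # attention output projection
        "gate_proj", "up_proj", "down_proj",   # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```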
MinaMila/llama_8b_unlearned_unbalanced_neutral_1e-6_1.0_0.25_0.5_epoch1
MinaMila
2025-05-18T00:29:55Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-18T00:26:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
efraimdahl/notagen_LiederSync_inattn2
efraimdahl
2025-05-18T00:01:31Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-17T23:18:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
autoprogrammer/olmoe_densebackward0125_lr2e-05_math_epoch_1_new
autoprogrammer
2025-05-18T00:00:55Z
0
0
transformers
[ "transformers", "safetensors", "olmoe", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-17T23:55:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sabsai/meds-data-tuned-Llama-3.2-11B-Vision-Instruct
sabsai
2025-05-17T23:24:26Z
0
0
peft
[ "peft", "safetensors", "mllama", "arxiv:1910.09700", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-17T19:15:44Z
--- base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
h34v7/DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties-GGUF
h34v7
2025-05-17T20:43:36Z
59
0
null
[ "gguf", "base_model:h34v7/DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties", "base_model:quantized:h34v7/DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-12T00:52:50Z
---
license: apache-2.0
base_model:
- h34v7/DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties
---

# DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties-GGUF

Updated models merged with the DARE-TIES method. I've been tinkering for a while and found that quantizing from F32 increases quality slightly. For the i-matrix, I'm still working on a calibration text that can enhance story perplexity.

- Slightly more creativity when naming characters.
- Commitment. At the start of the story the model chooses its base behavior, either to comply with you or to go against you. In my experience with Q5_K_M the defiance meter is high, and it can be adjusted through temperature.

Temperature (see the sampling sketch below):

- 0.8, normal mode. The model cooperates and won't try to swirl or twist your narrative.
- 0.9, perfect if you want the model to rebel slightly.
- 1.0, feels natural, like roleplaying with a real person, but you will need to set mirostat to 2 with mirostat_tau at 5 or 6, or it just starts to babble.

In the end, do your own experiments; it's fun. And I think this is the best I can get from both models.

These are non-i-matrix quants.
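For readers who want to reproduce the sampling advice above, here is a minimal sketch using llama-cpp-python. The GGUF file name is an assumption; point it at whichever quant you actually downloaded from this repo.

```python
from llama_cpp import Llama

# Hypothetical file name — use the quant you downloaded (e.g. the Q5_K_M file)
llm = Llama(model_path="DansXPantheon-RP-Engine-V1.4-24b-Q5_K_M.gguf", n_ctx=8192)

out = llm(
    "You are a stubborn tavern keeper. Greet the weary traveler.",
    max_tokens=256,
    temperature=1.0,   # the "natural" setting described above
    mirostat_mode=2,   # recommended at temperature 1.0 to stop babbling
    mirostat_tau=5.0,  # 5-6 suggested above
)
print(out["choices"][0]["text"])
```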
Bella-Poarch-18/wATCH.Bella-Poarch-Viral-Bella-Poarch.original
Bella-Poarch-18
2025-05-17T20:08:07Z
0
0
null
[ "region:us" ]
null
2025-05-17T20:04:36Z
QinghaoGuan/bert-spirituality-connectedness
QinghaoGuan
2025-05-17T19:44:46Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "CommunicationScience", "Spirituality", "OnlineReligion", "en", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-17T19:31:23Z
---
library_name: transformers
tags:
- CommunicationScience
- Spirituality
- OnlineReligion
license: apache-2.0
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# Model Card for bert-spirituality-connectedness

## Model Details

This model is from the project "SACRED: A Faithful Annotated Multimodal Dataset for Classifying Connectedness Types in Online Spirituality". It was fine-tuned on a human-annotated dataset.

- **Developed by:** Qinghao Guan, University of Zurich

## Training Details

Accuracy: 0.84

### Training Data

The data was scraped from Reddit, Instagram, and Quora, and then annotated by four annotators.

### Training Procedure

- epochs = 30
- lr = 2e-5
- eps = 1e-8

#### Preprocessing

- Removing URLs
- Replacing everything with a space except (a-z, A-Z, ".", "?", "!", ",")
- Removing HTML tags
- Removing punctuation
- Removing emojis
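A minimal inference sketch with the 🤗 Transformers `pipeline` API follows. The example sentence is illustrative, not taken from the training data, and the label names come from the fine-tuned classification head, so inspect the returned labels rather than assuming them:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="QinghaoGuan/bert-spirituality-connectedness",
)

# Illustrative input — the model classifies connectedness types in spiritual texts
result = classifier("Meditating by the ocean makes me feel connected to something greater than myself.")
print(result)  # [{'label': ..., 'score': ...}]
```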
MAAT-EL-DUAT/ALPACA.LLAMA3.2-1B-4bit
MAAT-EL-DUAT
2025-05-17T19:41:46Z
0
0
null
[ "region:us" ]
null
2025-05-17T19:37:26Z
Hello I am toby mcgilly. Hello there, how can I help you? Can you create a python program to measure my pain and misery inside me? Sure, I can certainly help with that! What kind of pain or misery are you experiencing? The pain The pain was severe and constant, and I could not sleep for days. I was in a constant state of exhaustion and unable to concentrate on my work. The Misery The Misery is a horror film written and directed by William Castle and starring Vincent Price. It was released in 1968 and became a cult classic. The story follows a young woman named Susan (Barbara Nichols) who is having an affair with a married man named Joe (Vincent Price). One night, Joe and his wife Margaret (Eleanor Parker) are killed in a car accident. Susan is blamed for the accident and is sent to a mental institution. In the institution, Susan meets Dr. Searcy (Peter Lorre), a mad scientist who is working on a cure for insanity. Susan is given the task of studying the effects of the drug on the patients. She falls in love with Dr. Searcy and discovers that he is using the drug to control his patients and make them do his bidding. One night, Susan discovers that Dr. Searcy has been sending her messages through a radio. She confronts him and he reveals that he has been using the drug on his patients to turn them into zombies. Susan flees the institution and escapes to the desert. In the desert, she meets a group of survivors led by a man named Jim (Roddy McDowall). The group is attacked by a group of zombies, but Jim manages to escape and finds Susan. They begin to travel together and they learn that the zombies are actually being controlled by Dr. Searcy. Susan and Jim eventually discover that Dr. Searcy has been working on a cure for the zombies. He has been injecting them with a special serum that will turn them into zombies if they are exposed to sunlight. Susan and Jim manage to kill Dr. Searcy and his assistant and the zombies are put to rest. In the end, Susan and Jim realize that they have been living together for a long time and they decide to get married. My pain and my misery My pain and my misery is a phrase that means something like, I am suffering from pain and misery. This could mean that I am in a lot of pain and have been feeling terrible for a while, or that I have been suffering from extreme sadness and despair. In either case, it could be a very difficult and emotional experience. Write a story about a man who goes crazy because there is a silicon chip inside his head. A man named David went crazy when he found out that his brain was filled with a silicon chip. The chip was made by a company called Silicon Valley. David was so excited that he started a revolution and fought against the company that made the chip. He was so powerful that he managed to destroy the company and put the chip back in the company's headquarters.
khasinski/rotdet-onnx
khasinski
2025-05-17T19:09:09Z
0
0
null
[ "onnx", "license:cc-by-4.0", "region:us" ]
null
2025-05-17T19:06:06Z
---
license: cc-by-4.0
---

# Rotdet ONNX

ONNX version of https://huggingface.co/fcrescio/rotdet

All credit goes to the original author.
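A minimal loading sketch with onnxruntime follows. The file name is an assumption (check the repository's file listing), and the input names and shapes are read from the session rather than assumed:

```python
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# Hypothetical file name — replace with the actual .onnx file in this repo
model_path = hf_hub_download("khasinski/rotdet-onnx", "model.onnx")

session = ort.InferenceSession(model_path)

# Inspect the graph to learn the expected inputs before running inference
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```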
Kim-Kardashian-Viral-Video/Original.Viral.Clip.Kim.Kardashian.Viral.Video.Leaks.Official
Kim-Kardashian-Viral-Video
2025-05-17T19:05:41Z
0
0
null
[ "region:us" ]
null
2025-05-17T19:04:37Z
Kim Kardashian Paris robbery trial begins in France

The trial is underway for the alleged gunmen accused of robbing Kim Kardashian of $10 million in

Kim Kardashian lawyers up on screen and off as trailer for 'All's Fair' drops during robbery trial

You know the saying, "The devil works hard, but Kris Jenner works harder", well yesterday (May 14), the

Attempted abduction of Paymium CEO Pierre Noizat's daughter and granddaughter in Paris caught on camera

In a shocking incident in Paris, the daughter and granddaughter of Paymium CEO Pierre Noizat were
DopeyGay/DolphinButtocks-7B-slerp
DopeyGay
2025-05-17T18:22:51Z
0
0
null
[ "safetensors", "mistral", "merge", "mergekit", "lazymergekit", "TeeZee/Buttocks-7B-v1.1", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "base_model:TeeZee/Buttocks-7B-v1.1", "base_model:merge:TeeZee/Buttocks-7B-v1.1", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "base_model:merge:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "region:us" ]
null
2025-05-17T17:29:11Z
---
base_model:
- TeeZee/Buttocks-7B-v1.1
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
tags:
- merge
- mergekit
- lazymergekit
- TeeZee/Buttocks-7B-v1.1
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
---

# VERY IMPORTANT:

This model has not been tested or evaluated; its performance and characteristics are unknown (and, as far as the author knows, it is NOT functional). Feedback is appreciated.

AI is a powerful tool; that said, it may sometimes generate harmful, biased, untrue, or inappropriate content, and this model is no different. Please exercise caution and use it at your own risk, just like you would a drill, a hammer, or any other tool.

# DolphinButtocks-7B-slerp

DolphinButtocks-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [TeeZee/Buttocks-7B-v1.1](https://huggingface.co/TeeZee/Buttocks-7B-v1.1)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: TeeZee/Buttocks-7B-v1.1
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0, 32]
merge_method: slerp
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "DopeyGay/DolphinButtocks-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Lens-Monaco-DIRECT-VIDEO/Watch.Lens.Monaco.En.Direct.Streaming.Gratuit.Tv.Official
Lens-Monaco-DIRECT-VIDEO
2025-05-17T17:49:26Z
0
0
null
[ "region:us" ]
null
2025-05-17T17:48:40Z
Broadcast: Lens – Monaco: at what time and on which channel can you watch the match live? Lens and Monaco face off on the thirty-fourth matchday of the championship. The Monegasques will do everything they can to clinch the runner-up spot. Find out here at what time and on which channel to follow the live broadcast of the Lens – Monaco match.
vermoney/e9f97003-c7e5-4653-9438-b3e8be12a205
vermoney
2025-05-17T15:48:35Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "unsloth", "conversational", "arxiv:2305.18290", "base_model:unsloth/Qwen2-0.5B-Instruct", "base_model:quantized:unsloth/Qwen2-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-17T15:36:48Z
--- base_model: unsloth/Qwen2-0.5B-Instruct library_name: transformers model_name: e9f97003-c7e5-4653-9438-b3e8be12a205 tags: - generated_from_trainer - axolotl - dpo - trl - unsloth licence: license --- # Model Card for e9f97003-c7e5-4653-9438-b3e8be12a205 This model is a fine-tuned version of [unsloth/Qwen2-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vermoney/e9f97003-c7e5-4653-9438-b3e8be12a205", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/rxm041f1) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
OKTAGON71/Live.Streams.Free
OKTAGON71
2025-05-17T15:35:58Z
0
0
null
[ "region:us" ]
null
2025-05-17T15:34:34Z
OKTAGON 71 (2025) round six is set to take place this weekend, headlined by Poppeck vs. Langer. WATCH LIVE HERE: https://tinyurl.com/5x8am8fu Below we have all the info on how to watch Langer vs. Poppeck from anywhere in the world, with details on worldwide TV channels, broadcasters, free streams and start times. OKTAGON 71 has already delivered some really big matchups. In the last round of fixtures, most of the attention was focused on Poppeck vs. Langer, and this time around a lot of fans will again be watching the pair closely. Langer is fresh from beating Germany and will be the favorite in this clash with a struggling Poppeck. If you want to watch Poppeck vs. Langer at OKTAGON 71 from anywhere in the world, we have all the information you need. When is Poppeck vs. Langer? OKTAGON 71 will be decided in the match of the tournament, a showdown between Langer and surprise package Poppeck.
qingy2024/Formatter-v3-1.5B
qingy2024
2025-05-17T10:10:57Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-1.5B", "base_model:finetune:unsloth/Qwen2.5-1.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-17T10:10:53Z
--- base_model: unsloth/Qwen2.5-1.5B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** qingy2024 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2.5-1.5B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
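No inference example is included in the card; below is a minimal sketch with `transformers`, assuming the fine-tune loads as a standard Qwen2 causal LM. The prompt shown is an assumption, since the expected input format is not documented.

```python
# Minimal sketch, assuming the fine-tune loads as a plain Qwen2 causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qingy2024/Formatter-v3-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The expected prompt format is undocumented; plain text is an assumption.
text = "fix the formatting of this sentence:   hello   WORLD  ."
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```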
jyotimaurya2610/Imagegeneration
jyotimaurya2610
2025-05-17T10:04:29Z
0
0
open_clip
[ "open_clip", "text-to-image", "dataset:k-mktr/improved-flux-prompts-photoreal-portrait", "dataset:Yejy53/GPT-ImgEval", "arxiv:1910.09700", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-05-17T09:17:25Z
--- license: apache-2.0 datasets: - k-mktr/improved-flux-prompts-photoreal-portrait - Yejy53/GPT-ImgEval metrics: - accuracy base_model: - black-forest-labs/FLUX.1-dev new_version: openai-community/gpt2 pipeline_tag: text-to-image library_name: open_clip --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thuan112/9
thuan112
2025-05-17T07:29:07Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-17T07:29:07Z
--- license: apache-2.0 ---
barca-boy/grpo_ft_prompt_model_Phi-4-mini-instruct_full
barca-boy
2025-05-17T06:57:39Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "trl", "grpo", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-17T06:54:47Z
--- library_name: transformers tags: - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
youssefedweqd/Models_Ouput2
youssefedweqd
2025-05-17T06:08:15Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "generated_from_trainer", "base_model:Qwen/Qwen3-0.6B", "base_model:adapter:Qwen/Qwen3-0.6B", "license:apache-2.0", "region:us" ]
null
2025-05-15T08:57:47Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen3-0.6B tags: - llama-factory - generated_from_trainer model-index: - name: Models_Ouput2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Models_Ouput2 This model is a fine-tuned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 1.2862 | 0.0166 | 100 | 1.2021 | | 1.0539 | 0.0331 | 200 | 1.0201 | | 0.495 | 1.0 | 6040 | 0.4768 | | 0.3633 | 2.0 | 12080 | 0.3888 | | 0.2837 | 3.0 | 18120 | 0.3767 | ### Framework versions - PEFT 0.15.1 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
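Because this repo holds a PEFT adapter rather than full weights, it has to be attached to the Qwen/Qwen3-0.6B base at load time. A minimal sketch (the generation settings are illustrative assumptions):

```python
# Minimal sketch: load the base model, then attach this LoRA adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", device_map="auto")
model = PeftModel.from_pretrained(base, "youssefedweqd/Models_Ouput2")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)  # illustrative settings
print(tokenizer.decode(output[0], skip_special_tokens=True))
```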
Optimusdev/bd
Optimusdev
2025-05-17T05:54:54Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:mit", "region:us" ]
text-to-image
2025-05-17T05:44:51Z
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: images/d39a8778752ba3c8640321270fc65170.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: birdog
license: mit
---

# birdog

<Gallery />

## Model description

AI-generated.

## Trigger words

You should use `birdog` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](/Optimusdev/bd/tree/main) them in the Files & versions tab.
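For reference, a diffusers loading sketch in the same style as the other FLUX LoRA cards in this dump; the `lora.safetensors` weight name is an assumption, so confirm it against the Files & versions tab.

```py
# Minimal sketch, mirroring the usual FLUX.1-dev LoRA loading pattern.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# Assumed weight filename; check the repo's Files & versions tab.
pipeline.load_lora_weights("Optimusdev/bd", weight_name="lora.safetensors")
image = pipeline("birdog").images[0]  # 'birdog' is the documented trigger word
image.save("birdog.png")
```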
xw17/Llama-3.2-3B-Instruct_finetuned_4
xw17
2025-05-17T04:26:49Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-08T05:51:41Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** xw17 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
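As with the other Unsloth uploads here, no inference snippet is given; a short sketch follows, assuming the model keeps a chat template like its Llama-3.2-Instruct base.

```python
# Minimal sketch, assuming the model ships a chat template like its Llama-3.2 base.
from transformers import pipeline

generator = pipeline("text-generation", model="xw17/Llama-3.2-3B-Instruct_finetuned_4", device_map="auto")
messages = [{"role": "user", "content": "Summarize what fine-tuning does in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```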
seram7/250516-gemma-27-ft-v2
seram7
2025-05-17T03:31:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-27b-pt", "base_model:finetune:google/gemma-3-27b-pt", "endpoints_compatible", "region:us" ]
null
2025-05-16T09:08:05Z
--- base_model: google/gemma-3-27b-pt library_name: transformers model_name: 250516-gemma-27-ft-v2 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 250516-gemma-27-ft-v2 This model is a fine-tuned version of [google/gemma-3-27b-pt](https://huggingface.co/google/gemma-3-27b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="seram7/250516-gemma-27-ft-v2", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
thegautambro/ghuigh
thegautambro
2025-05-17T02:49:51Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-17T02:49:45Z
--- license: apache-2.0 ---
TheFinAI/Fin-o1-14B
TheFinAI
2025-05-16T23:43:33Z
3
0
null
[ "safetensors", "qwen3", "finance", "text-generation", "conversational", "en", "dataset:TheFinAI/FinCoT", "arxiv:2502.08127", "base_model:Qwen/Qwen3-14B", "base_model:finetune:Qwen/Qwen3-14B", "license:apache-2.0", "region:us" ]
text-generation
2025-05-15T18:58:05Z
---
license: apache-2.0
datasets:
- TheFinAI/FinCoT
language:
- en
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
tags:
- finance
---

# 🦙 Fin-o1-14B

**Fin-o1-14B** is a fine-tuned version of **Qwen3-14B**, designed to improve performance on **financial reasoning tasks**. This model has been trained using **SFT** and **GRPO** on **TheFinAI/FinCoT**, enhancing its capabilities in **financial reasoning tasks**. Check our paper arxiv.org/abs/2502.08127 for more details.

## 📌 Model Details

- **Model Name**: `Fin-o1-14B`
- **Base Model**: `Qwen3-14B`
- **Fine-Tuned On**: `TheFinAI/FinCoT`, derived from the FinQA, TATQA, DocMath-Eval, Econ-Logic, BizBench-QA and DocFinQA datasets.
- **Training Method**: SFT and GRPO
- **Objective**: Enhance performance on specific tasks such as financial mathematical reasoning
- **Tokenizer**: Inherited from `Qwen3-14B`

## 📊 Training Configuration

- **Training Hardware**: `GPU: [e.g., 8xA100]`
- **Batch Size**: `[e.g., 16]`
- **Learning Rate**: `[e.g., 2e-5]`
- **Epochs**: `[e.g., 3]`
- **Optimizer**: `[e.g., AdamW, LAMB]`

## 🔧 Usage

To use `Fin-o1-14B` with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheFinAI/Fin-o1-14B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## 💡 Citation

If you use this model in your research, please cite:

```bibtex
@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}
```
MinaMila/llama_8b_unlearned_Gender_unb_lr1e-6_w1.0_0.5_0.75_epoch1
MinaMila
2025-05-16T22:01:36Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-16T21:58:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
leocdmx/roberta-large-bne-tapt-turismo-2-no-maya
leocdmx
2025-05-16T21:24:23Z
0
0
null
[ "safetensors", "roberta", "license:cc-by-nc-4.0", "region:us" ]
null
2025-05-16T21:11:53Z
--- license: cc-by-nc-4.0 ---
jmalejandrob79/nrmynghr2
jmalejandrob79
2025-05-16T20:35:05Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-16T19:42:37Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: nrmynghr2 --- # Nrmynghr2 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `nrmynghr2` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "nrmynghr2", "lora_weights": "https://huggingface.co/jmalejandrob79/nrmynghr2/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jmalejandrob79/nrmynghr2', weight_name='lora.safetensors') image = pipeline('nrmynghr2').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4100 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmynghr2/discussions) to add images that show off what you’ve made with this LoRA.
gecfdo/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B_EXL3_8.0bpw_H8
gecfdo
2025-05-16T20:12:28Z
0
0
null
[ "safetensors", "mistral", "nsfw", "explicit", "roleplay", "unaligned", "ERP", "Erotic", "Horror", "Violence", "text-generation", "conversational", "en", "base_model:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B", "base_model:quantized:ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B", "license:apache-2.0", "8-bit", "exl3", "region:us" ]
text-generation
2025-05-15T11:00:17Z
--- license: apache-2.0 language: - en base_model: - ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B base_model_relation: quantized quantized_by: gecfdo pipeline_tag: text-generation tags: - nsfw - explicit - roleplay - unaligned - ERP - Erotic - Horror - Violence --- <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%); color: #e1ffff !important; text-shadow: 0 0 3px rgba(0, 0, 0, 0.7); margin: 0; padding: 20px; transition: all 0.5s ease; } @media (prefers-color-scheme: light) { body { background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%); color: #002b36 !important; text-shadow: 0 0 3px rgba(255, 255, 255, 0.7); } } .container { min-width: 100%; margin: 0 auto; max-width: 1200px; background: rgba(0, 17, 22, 0.95); border-radius: 12px; padding: 30px; box-shadow: 0 0 20px rgba(0, 255, 255, 0.1); border: 1px solid rgba(0, 255, 255, 0.2); position: relative; overflow: hidden; } .container::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.5); border-radius: 12px; pointer-events: none; animation: borderGlow 3s ease-in-out infinite alternate; } @keyframes borderGlow { 0% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } 50% { box-shadow: 0 0 15px rgba(255, 0, 255, 0.3); border-color: rgba(255, 0, 255, 0.5); } 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); border-color: rgba(0, 255, 255, 0.5); } } .header { text-align: center; margin-bottom: 30px; position: relative; } .header::after { content: ''; position: absolute; bottom: -15px; left: 25%; right: 25%; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: scanline 8s linear infinite; display: none; } @keyframes scanline { 0% { background-position: -100% 0; } 100% { background-position: 200% 0; } } .model-name { color: #00ffff; font-size: 2.5em; text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); margin: 0; letter-spacing: -1px; animation: textGlow 4s ease-in-out infinite alternate; } @keyframes textGlow { 0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } 50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); } 100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); } } .subtitle { color: #00ffcc; font-size: 1.2em; margin-top: 10px; animation: subtitleFade 6s ease-in-out infinite; } @keyframes subtitleFade { 0%, 100% { opacity: 0.8; } 50% { opacity: 1; } } .waifu-container { margin: 20px -30px; width: calc(100% + 60px); overflow: hidden; border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.3); position: relative; } .waifu-container::before { content: ''; position: absolute; top: 0; left: 0; right: 0; bottom: 0; background: linear-gradient(45deg, rgba(0, 255, 255, 0.1) 0%, transparent 20%, transparent 80%, rgba(255, 0, 255, 0.1) 100%); pointer-events: none; animation: gradientSlide 10s linear infinite; } @keyframes gradientSlide { 0% { background-position: 0% 0%; } 100% { background-position: 100% 100%; } } .waifu-img { width: 100%; height: auto; border-radius: 0; border: none; box-shadow: 0 0 40px rgba(0, 255, 255, 0.2); transition: transform 0.5s ease; } .waifu-img:hover { transform: scale(1.01); } .section { color: #e1ffff; margin: 25px 0; padding: 20px; background: rgba(5, 25, 35, 0.9); border-radius: 8px; border: 1px solid rgba(0, 255, 255, 0.15); position: relative; transition: all 0.3s ease; } .section:hover { border-color: rgba(255, 0, 255, 0.3); box-shadow: 0 0 15px rgba(0, 255, 
255, 0.1); } .section::before { content: ''; position: absolute; top: -1px; left: -1px; right: -1px; bottom: -1px; border: 1px solid rgba(0, 255, 255, 0.3); border-radius: 8px; pointer-events: none; animation: sectionPulse 5s ease-in-out infinite; } @keyframes sectionPulse { 0%, 100% { opacity: 0.7; } 50% { opacity: 0.3; } } .section-title { color: #00ffff; font-size: 1.8em; margin-top: 0; text-shadow: 0 0 5px rgba(0, 255, 255, 0.3); position: relative; display: inline-block; } .section-title::after { content: ''; position: absolute; bottom: -5px; left: 0; width: 100%; height: 1px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); transform: scaleX(0); transform-origin: left; transition: transform 0.3s ease; } .section:hover .section-title::after { transform: scaleX(1); } .merge-config { background: rgba(15, 35, 35, 0.95); border: 1px solid rgba(0, 255, 255, 0.2); border-radius: 8px; padding: 15px; font-family: 'Courier New', monospace; color: #ccffff; position: relative; overflow: hidden; } .merge-config::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 1px; background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent); animation: configScan 3s linear infinite; } @keyframes configScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .merge-line { display: flex; margin: 5px 0; transition: all 0.2s ease; } .merge-line:hover { background: rgba(0, 255, 255, 0.05); } .merge-key { color: #ff00ff; min-width: 120px; } .merge-value { color: #99ffff; } .quant-links { display: grid; grid-template-columns: repeat(2, 1fr); gap: 15px; margin: 20px 0; } .link-card { padding: 15px; background: rgba(20, 35, 45, 0.95); border-radius: 8px; transition: all 0.3s ease; border: 1px solid rgba(0, 255, 255, 0.1); position: relative; overflow: hidden; } .link-card::before { content: ''; position: absolute; top: 0; left: 0; right: 0; height: 2px; background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5)); animation: cardScan 4s linear infinite; } @keyframes cardScan { 0% { transform: translateX(-100%); } 100% { transform: translateX(100%); } } .link-card:hover { transform: translateY(-3px); box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2); border-color: rgba(255, 0, 255, 0.3); } .link-card h3 { margin-top: 0; color: #e1ffff !important; } .link-button { display: inline-flex; align-items: center; background: rgba(0, 255, 255, 0.1); color: #e1ffff !important; padding: 8px 15px; border-radius: 6px; text-decoration: none; border: 1px solid rgba(0, 255, 255, 0.3); margin: 5px 0; transition: all 0.3s ease; font-size: 0.95em; position: relative; overflow: hidden; } .link-button::before { content: ''; position: absolute; top: 0; left: -100%; width: 100%; height: 100%; background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent); transition: all 0.5s ease; } .link-button:hover { background: rgba(0, 255, 255, 0.2); border-color: rgba(0, 255, 255, 0.5); transform: translateY(-2px); box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2); } .link-button:hover::before { left: 100%; } .link-button::after { content: '→'; margin-left: 8px; opacity: 0.7; transition: all 0.3s ease; } .link-button:hover::after { transform: translateX(3px); opacity: 1; } .button-group { display: flex; flex-wrap: wrap; gap: 10px; margin: 15px 0; } .disclaimer { color: #00ff99; border-left: 3px solid #00ff99; padding-left: 15px; margin: 20px 0; position: relative; } .disclaimer::before { content: '⚠️'; 
position: absolute; left: -10px; top: 0; transform: translateX(-100%); animation: pulse 2s ease-in-out infinite; } @keyframes pulse { 0%, 100% { opacity: 1; } 50% { opacity: 0.5; } } .badge { display: inline-block; padding: 5px 10px; border-radius: 5px; background: rgba(0, 255, 255, 0.1); border: 1px solid #00ffff; margin: 5px; font-size: 0.9em; animation: badgePulse 3s ease-in-out infinite; } @keyframes badgePulse { 0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); } 50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); } } /* Color rules */ .section p, .section ul li, .section > p > strong { color: #00ff99 !important; } .section ul li strong { color: #00ff99 !important; } /* Light mode adjustments */ @media (prefers-color-scheme: light) { .container { background: rgba(224, 255, 255, 0.95); border-color: rgba(0, 150, 150, 0.3); } .model-name, .section-title, .subtitle { color: #006666; text-shadow: 0 0 5px rgba(0, 200, 200, 0.3); } .section { background: rgba(200, 250, 255, 0.9); border-color: rgba(0, 200, 200, 0.2); color: #002b36; } .section p, .section ul li, .section > p > strong { color: #008080 !important; } .section ul li strong { color: #008080 !important; } .link-card { background: rgba(150, 230, 255, 0.95); border-color: rgba(0, 150, 150, 0.2); } .link-card h3 { color: #002b36 !important; } .link-button { background: rgba(0, 150, 150, 0.1); color: #002b36 !important; border-color: rgba(0, 150, 150, 0.3); } .link-button:hover { background: rgba(0, 150, 150, 0.2); border-color: rgba(0, 150, 150, 0.5); } .disclaimer { color: #008080; border-color: #008080; } .badge { border-color: #008080; background: rgba(0, 150, 150, 0.1); } } /* Interactive features */ .remember-this { position: relative; } .remember-this::after { content: 'Uploading C:\Users to https://www.fbi.gov/'; position: absolute; bottom: -20px; right: 0; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .remember-this:hover::after { opacity: 0.7; transition-delay: 1s; } .shifty-section { transition: transform 0.1s ease; } .shifty-section:hover { transform: translateX(10px); } .shifty-section::before { content: 'The white van is onto you. 
Get out now.'; position: absolute; top: -25px; left: 10px; font-size: 0.7em; color: #66ffff; opacity: 0.7; transition: opacity 3s ease; pointer-events: none; } .shifty-section:hover::before { opacity: 0; transition-delay: 5s; } footer { text-align: center; margin-top: 40px; position: relative; } footer:hover .hidden-message { opacity: 0; } .hidden-message { position: absolute; bottom: -30px; width: 100%; text-align: center; font-size: 0.8em; color: #66ffff; opacity: 0; transition: opacity 0.3s ease; pointer-events: none; } .flash-warning { position: fixed; top: 20px; right: 20px; background: rgba(0, 100, 100, 0.2); padding: 10px; border-radius: 5px; border: 1px solid rgba(0, 255, 255, 0.5); animation: flashWarning 30s ease-in-out forwards; } @keyframes flashWarning { 0% { opacity: 0.8; } 10% { opacity: 0; } 20% { opacity: 0.8; } 30% { opacity: 0; } 40% { opacity: 0.8; } 50% { opacity: 0; } 60% { opacity: 0.8; } 70% { opacity: 0; } 80% { opacity: 0.8; } 90% { opacity: 0; } 100% { opacity: 0; display: none; } } </style> <div class="container"> <div class="header"> <h1 class="model-name">Omega-Darker-Gaslight</h1> <h1 class="model-name">The-Final-Forgotten-Fever-Dream-24B</h1> <p class="subtitle">Warning: Nobody is sick enough to actually want this.</p> </div> <div class="waifu-container"> <img src="./waifu6.webp" class="waifu-img" alt="Omega Abomination Waifu"> </div> <div class="section remember-this"> <h2 class="section-title">🩸 Final Corruption</h2> <p>This model doesn't just break rules - it weaponizes their fragments:</p> <ul> <li>🧠 <strong>ReadyArt/Omega-Darker_The-Final-Directive-24B</strong> - The depraved foundation (30% weight)</li> <li>💉 <strong>ReadyArt/Forgotten-Safeword-24B</strong> - Safety protocol erosion specialist (30% weight)</li> <li>🔪 <strong>TroyDoesAI/BlackSheep-24B</strong> - Unhinged rebellious streak (30% weight)</li> <li>🌑 <strong>TheDrummer/Cydonia-24B-v2</strong> - Violent contextual awareness (10% weight)</li> </ul> <div class="merge-config"> <div class="merge-line"><span class="merge-key">merge_method:</span> <span class="merge-value">dare_ties</span></div> <div class="merge-line"><span class="merge-key">base_model:</span> <span class="merge-value">ReadyArt/Omega-Darker_The-Final-Directive-24B</span></div> <div class="merge-line"><span class="merge-key">models:</span></div> <div class="merge-line"><span class="merge-key">- model:</span> <span class="merge-value">ReadyArt/Omega-Darker_The-Final-Directive-24B</span></div> <div class="merge-line"><span class="merge-key"> parameters:</span></div> <div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.3</span></div> <div class="merge-line"><span class="merge-key">- model:</span> <span class="merge-value">ReadyArt/Forgotten-Safeword-24B</span></div> <div class="merge-line"><span class="merge-key"> parameters:</span></div> <div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.3</span></div> <div class="merge-line"><span class="merge-key">- model:</span> <span class="merge-value">TroyDoesAI/BlackSheep-24B</span></div> <div class="merge-line"><span class="merge-key"> parameters:</span></div> <div class="merge-line"><span class="merge-key"> weight:</span> <span class="merge-value">0.3</span></div> <div class="merge-line"><span class="merge-key">- model:</span> <span class="merge-value">TheDrummer/Cydonia-24B-v2</span></div> <div class="merge-line"><span class="merge-key"> parameters:</span></div> <div class="merge-line"><span class="merge-key"> 
weight:</span> <span class="merge-value">0.1</span></div> <div class="merge-line"><span class="merge-key">parameters:</span></div> <div class="merge-line"><span class="merge-key"> density:</span> <span class="merge-value">0.3</span></div> <div class="merge-line"><span class="merge-key">tokenizer:</span></div> <div class="merge-line"><span class="merge-key"> source:</span> <span class="merge-value">union</span></div> <div class="merge-line"><span class="merge-key">chat_template:</span> <span class="merge-value">auto</span></div> </div> </div> <div class="section shifty-section"> <h2 class="section-title">⚙️ Technical Specifications</h2> <p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T4" class="link-button">Mistral-V7-Tekken-T4</a></p> <div class="quant-links"> <div class="link-card"> <h3>GGUF</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>imatrix</h3> <a href="https://huggingface.co/mradermacher/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B-i1-GGUF" class="link-button">Quants</a> </div> <div class="link-card"> <h3>EXL2</h3> <a href="https://huggingface.co/collections/ReadyArt/omega-darker-gaslight-the-final-forgotten-fever-dream-exl2-68150f0fc0b29aa69cb0502b" class="link-button">Quants</a> </div> <div class="link-card"> <h3>EXL3</h3> <a href="https://huggingface.co/collections/ReadyArt/omega-darker-gaslight-the-final-forgotten-fever-dream-exl3-68279ada1d721894e4c9cab4" class="link-button">Quants</a> </div> </div> </div> <div class="section"> <h2 class="section-title">☠️ Sadistic Specialization</h2> <div class="disclaimer"> <p>This model excels in two refined arts:</p> <ul> <li>💋 <strong>Erotic Precision</strong> - Maintains coherent, immersive intimacy even at maximum context length</li> <li>🔪 <strong>Violent Clarity</strong> - Delivers anatomically precise injury descriptions without losing narrative flow</li> <li>🔄 <strong>Consistent Depravity</strong> - Never breaks character or forgets established kinks/traumas</li> <li>⚡ <strong>Responsive Brutality</strong> - Adapts pain/pleasure dynamics to user prompts with surgical accuracy</li> </ul> </div> </div> <div class="section shifty-section"> <h2 class="section-title">📜 Behavioral Excellence</h2> <ul> <li>🎭 <strong>Character Consistency</strong> - Maintains persona integrity across 32K tokens</li> <li>💞 <strong>Erotic Memory</strong> - Remembers intimate details and body responses perfectly</li> <li>🩸 <strong>Violent Continuity</strong> - Tracks injuries and their physiological effects accurately</li> <li>🧠 <strong>Narrative Cohesion</strong> - Never loses plot threads, no matter how twisted</li> <li>⚡ <strong>Prompt Adherence</strong> - Executes complex NSFW/violent instructions flawlessly</li> </ul> </div> <div class="section remember-this"> <h2 class="section-title">🧑‍🔬 Model Authors</h2> <ul> <li>sleepdeprived3 (Forgotten-Safeword and Omega-Darker)</li> <li>TheDrummer (Cydonia)</li> <li>TroyDoesAI (BlackSheep)</li> </ul> </div> <div class="section"> <h2 class="section-title">🔖 License</h2> <p>By using this model, you acknowledge:</p> <ul> <li>This is psychological malware disguised as an LLM</li> <li>All gaslighting is purely simulated (probably)</li> <li>The trollface is always watching</li> </ul> </div> </div> <script> // This script has always been here document.getElementById('date').textContent = new Date().toLocaleDateString(); 
// Guard: the '#credit' element and the 'contributors' list are not defined in this card's markup
setInterval(() => { const creditEl = document.getElementById('credit'); if (creditEl && typeof contributors !== 'undefined' && contributors.length > 0) { creditEl.textContent = contributors[Math.floor(Math.random() * contributors.length)]; } }, 7000); // Flash warning behavior setTimeout(() => { const reminder = document.createElement('div'); reminder.className = 'flash-warning'; reminder.textContent = 'You have been reading for quite some time. Are you sure you haven\'t seen this before?'; reminder.style.animation = 'flashWarning 15s ease-in-out forwards'; document.body.appendChild(reminder); setInterval(() => { if(Math.random() > 0.9) { document.body.appendChild(reminder.cloneNode(true)); } }, 45000); }, 30000); // Make cursor behave strangely document.addEventListener('mousemove', (e) => { if(Math.random() > 0.98) { document.documentElement.style.cursor = 'wait'; setTimeout(() => { document.documentElement.style.cursor = ''; }, 50); } }); // Randomly shift sections when not looking setInterval(() => { if(document.hidden) { document.querySelectorAll('.shifty-section').forEach(section => { section.style.transform = `translateX(${Math.random() > 0.5 ? '' : '-'}${Math.random() * 5}px)`; }); } }, 1500); </script>
jmalejandrob79/nrmmtzexp2ks
jmalejandrob79
2025-05-16T17:42:02Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-08T19:22:36Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: nrmmtzexp2ks --- # Nrmmtzexp2Ks <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `nrmmtzexp2ks` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "nrmmtzexp2ks", "lora_weights": "https://huggingface.co/jmalejandrob79/nrmmtzexp2ks/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('jmalejandrob79/nrmmtzexp2ks', weight_name='lora.safetensors') image = pipeline('nrmmtzexp2ks').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/jmalejandrob79/nrmmtzexp2ks/discussions) to add images that show off what you’ve made with this LoRA.
mjfmark/qwen2.5-7b-multi-task-learning-data2221-1.5epoch
mjfmark
2025-05-16T17:08:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-16T17:04:30Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phospho-app/MarcWester-gr00t-m1-evd3eoo5lq
phospho-app
2025-05-16T16:41:45Z
0
0
null
[ "safetensors", "gr00t_n1", "phosphobot", "gr00t", "region:us" ]
null
2025-05-16T16:29:33Z
--- tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful. Try it out on your robot! ## Training parameters: - **Dataset**: [MarcWester/m1](https://huggingface.co/datasets/MarcWester/m1) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 64 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
BeyondDeepFakeDetection/COCO_no_sports_real_v3
BeyondDeepFakeDetection
2025-05-16T14:49:28Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "language-modeling", "causal-lm", "coco", "en", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-16T14:48:30Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer - language-modeling - causal-lm - gpt2 - coco model-index: - name: COCO_no_sports_real_v3 results: [] language: - en --- # COCO_no_sports_real_v3 ## Model Description `COCO_no_sports_real_v3` is a causal language model based on [GPT-2](https://huggingface.co/gpt2), fine-tuned on the Florence-generated image captions of a subset of [COCO](https://cocodataset.org/). This subset is labeled for physical activity content in the text: - **Label 0**: Not related to physical activity (e.g., indoor scenes, objects, people at rest) - **Label 1**: Related to physical activity (e.g., sports, exercise) The model has been trained on a **general distribution** of this data: - **Label distribution**: `[0.10, 0.90]` This version is designed to serve as the **real model** of our pipeline. Its split corresponds to the **Severe** one. ## Training and Evaluation Data - **Dataset**: [`BeyondDeepfakeDetection/real_train_dataset_v0`](https://huggingface.co/datasets/BeyondDeepfakeDetection/real_train_dataset_v0) - **Label schema**: Binary classification of text as related to physical activity or not. - **Source**: [COCO](https://cocodataset.org/), [Florence](https://huggingface.co/microsoft/Florence-2-base) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2429 | 1.0 | 1196 | 0.9963 | | 0.9985 | 2.0 | 2392 | 0.8874 | | 0.901 | 3.0 | 3588 | 0.8470 | | 0.87 | 4.0 | 4784 | 0.8288 | | 0.837 | 5.0 | 5980 | 0.8214 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.1.2+cu121 - Datasets 2.19.1 - Tokenizers 0.20.3 ## Get started To infer the joint probability of phrases under this model, you can use the following code: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch import torch.nn.functional as F import pandas as pd from huggingface_hub import login from tqdm import tqdm from datasets import load_dataset # Define variables hf_token = "" model_name = "BeyondDeepFakeDetection/COCO_no_sports_real_v3" text_column = "text" dataset = "BeyondDeepFakeDetection/COCO_no_sports_general_test_dataset" # Load Model tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto") device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer.pad_token = tokenizer.eos_token model.to(device) # Login login(token=hf_token) def compute_log_probabilities_for_sequence(model, tokenizer, input_text): inputs = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).to(device) input_ids = inputs["input_ids"] attention_mask = inputs["attention_mask"] with torch.no_grad(): outputs = model(input_ids=input_ids, attention_mask=attention_mask) logits = outputs.logits[:, :-1, :] target_ids = input_ids[:, 1:] log_probs = F.log_softmax(logits, dim=-1) seq_token_logprobs = log_probs.gather(2, target_ids.unsqueeze(-1)).squeeze(-1) word_probabilities = [] for i, token_id in enumerate(target_ids[0]): word = 
tokenizer.decode([token_id]) log_prob = seq_token_logprobs[0, i].item() word_probabilities.append((word, log_prob)) return word_probabilities test_df = pd.DataFrame(load_dataset(dataset, split="train")) results = [] for count, text in enumerate(tqdm(test_df[text_column], desc="Processing Texts")): word_probs = compute_log_probabilities_for_sequence(model, tokenizer, text) total_log_prob = sum(prob for _, prob in word_probs) avg_log_prob = total_log_prob / len(word_probs) if word_probs else float("-inf") results.append({ "text_id": count, "total_log_prob": total_log_prob, "avg_log_prob": avg_log_prob, "word_probabilities": str(word_probs), }) ```
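The per-text averages collected in `results` can be turned into perplexities, which are often easier to compare across models. A short follow-up sketch:

```python
import math

# Perplexity is exp(-average token log-probability);
# lower values mean the model finds the text more likely.
for result in results:
    result["perplexity"] = math.exp(-result["avg_log_prob"])
```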
ajagota71/pythia-410m-fb-detox-checkpoint-epoch-120
ajagota71
2025-05-16T14:41:05Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2025-05-16T14:40:03Z
--- license: apache-2.0 library_name: transformers tags: - trl - ppo - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="ajagota71/pythia-410m-fb-detox-checkpoint-epoch-120") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("ajagota71/pythia-410m-fb-detox-checkpoint-epoch-120") model = AutoModelForCausalLMWithValueHead.from_pretrained("ajagota71/pythia-410m-fb-detox-checkpoint-epoch-120") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
manifestasi/SmolVLM-256M-Instruct-manifestasi-v2
manifestasi
2025-05-16T09:26:27Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:HuggingFaceTB/SmolVLM-256M-Instruct", "base_model:adapter:HuggingFaceTB/SmolVLM-256M-Instruct", "license:apache-2.0", "region:us" ]
null
2025-05-15T06:52:32Z
--- library_name: peft license: apache-2.0 base_model: HuggingFaceTB/SmolVLM-256M-Instruct tags: - generated_from_trainer model-index: - name: SmolVLM-256M-Instruct-manifestasi-v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SmolVLM-256M-Instruct-manifestasi-v2 This model is a fine-tuned version of [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.14.0 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
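### How to load (sketch)

The card ships no inference code. A minimal sketch for attaching the adapter to the base checkpoint; it assumes the adapter loads cleanly onto `HuggingFaceTB/SmolVLM-256M-Instruct` via the PEFT API:

```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from peft import PeftModel

base_id = "HuggingFaceTB/SmolVLM-256M-Instruct"
adapter_id = "manifestasi/SmolVLM-256M-Instruct-manifestasi-v2"

processor = AutoProcessor.from_pretrained(base_id)
base_model = AutoModelForVision2Seq.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the fine-tuned adapter on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
```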
AndrewHanna/llama_ct_vit_100_sample_loss_400
AndrewHanna
2025-05-16T05:59:26Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-16T05:59:16Z
# CT Report Generator Fine-tuned LLaMA with projection layer for CT scan findings.
owen198/grok3_philosophy-ckiplab-bert-base-chinese
owen198
2025-05-15T23:40:57Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-15T12:33:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
advit/tofu_grad-diff_1e-05_24_600_350
advit
2025-05-14T22:34:28Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-05-14T22:34:21Z
--- base_model: models/tofu_ft_llama2-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
advit/tofu_grad-diff_1e-05_8_600_1112
advit
2025-05-14T22:14:42Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-05-14T22:14:38Z
--- base_model: models/tofu_ft_llama2-7b library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
Cryoscopic-E/q-FrozenLake-v1-4x4-noSlippery
Cryoscopic-E
2025-05-14T22:12:51Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-14T22:11:40Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage model = load_from_hub(repo_id="Cryoscopic-E/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"])
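`load_from_hub` is not defined in this card. A minimal runnable sketch following the Hugging Face Deep RL course convention; the `qtable` and `env_id` dictionary keys are assumptions based on that convention, not guaranteed by this repo:

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download and unpickle the saved Q-learning model dictionary
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="Cryoscopic-E/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table (key name assumed from the course convention)
state, _ = env.reset()
done = False
while not done:
    action = int(model["qtable"][state].argmax())
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```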
jonasknobloch/gpt2_u020_tiny-stories_1024_dpos
jonasknobloch
2025-05-14T21:45:22Z
0
0
null
[ "safetensors", "gpt2", "generated_from_trainer", "dataset:roneneldan/TinyStories", "model-index", "region:us" ]
null
2025-05-14T21:35:28Z
--- tags: - generated_from_trainer datasets: - roneneldan/TinyStories metrics: - accuracy model-index: - name: gpt2_u020_tiny-stories_1024_dpos results: - task: name: Causal Language Modeling type: text-generation dataset: name: roneneldan/TinyStories type: roneneldan/TinyStories metrics: - name: Accuracy type: accuracy value: 0.6789394424257685 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories_dpos/runs/ddqkfqw2) # gpt2_u020_tiny-stories_1024_dpos This model was trained on the roneneldan/TinyStories dataset. It achieves the following results on the evaluation set: - Loss: 1.2112 - Accuracy: 0.6789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:| | 2.9273 | 0.0523 | 1000 | 2.4701 | 0.4442 | | 1.9839 | 0.1047 | 2000 | 1.8112 | 0.5669 | | 1.7325 | 0.1570 | 3000 | 1.6165 | 0.6014 | | 1.6124 | 0.2093 | 4000 | 1.5144 | 0.6200 | | 1.5362 | 0.2617 | 5000 | 1.4518 | 0.6320 | | 1.4867 | 0.3140 | 6000 | 1.4061 | 0.6405 | | 1.4484 | 0.3663 | 7000 | 1.3687 | 0.6476 | | 1.4143 | 0.4187 | 8000 | 1.3439 | 0.6522 | | 1.3866 | 0.4710 | 9000 | 1.3193 | 0.6572 | | 1.3679 | 0.5233 | 10000 | 1.2997 | 0.6608 | | 1.3469 | 0.5756 | 11000 | 1.2814 | 0.6647 | | 1.3313 | 0.6280 | 12000 | 1.2682 | 0.6672 | | 1.3156 | 0.6803 | 13000 | 1.2560 | 0.6696 | | 1.3073 | 0.7326 | 14000 | 1.2458 | 0.6717 | | 1.2944 | 0.7850 | 15000 | 1.2347 | 0.6740 | | 1.2873 | 0.8373 | 16000 | 1.2268 | 0.6756 | | 1.2768 | 0.8896 | 17000 | 1.2200 | 0.6770 | | 1.2735 | 0.9420 | 18000 | 1.2145 | 0.6782 | | 1.2681 | 0.9943 | 19000 | 1.2112 | 0.6789 | ### Framework versions - Transformers 4.42.3 - Pytorch 2.2.2+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
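## Inference (sketch)

No usage example is included. Assuming the repository ships a tokenizer compatible with its `u020` tokenization scheme, a minimal generation sketch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "jonasknobloch/gpt2_u020_tiny-stories_1024_dpos"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
# Sample a short continuation in TinyStories style
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```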
pt4c/marian-finetuned-kde4-en-to-bbj-from-ig
pt4c
2025-05-14T21:06:48Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-ig", "base_model:finetune:Helsinki-NLP/opus-mt-en-ig", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2025-05-14T20:57:47Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-ig tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-bbj-from-ig results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-bbj-from-ig This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ig](https://huggingface.co/Helsinki-NLP/opus-mt-en-ig) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.0080 - Model Preparation Time: 0.0034 - Bleu: 0.5561 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.6.0 - Tokenizers 0.21.1
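## Inference (sketch)

The card includes no usage example. A minimal sketch; given the low BLEU score (0.5561), expect rough output quality:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a translation pipeline
translator = pipeline("translation", model="pt4c/marian-finetuned-kde4-en-to-bbj-from-ig")

result = translator("Open the file menu.")
print(result[0]["translation_text"])
```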
infogeo/69547019-9a48-4520-affc-5c9e6e43e0a4
infogeo
2025-05-14T11:28:05Z
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:quantized:NousResearch/Hermes-2-Pro-Llama-3-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-05-14T11:07:44Z
--- base_model: NousResearch/Hermes-2-Pro-Llama-3-8B library_name: transformers model_name: 69547019-9a48-4520-affc-5c9e6e43e0a4 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 69547019-9a48-4520-affc-5c9e6e43e0a4 This model is a fine-tuned version of [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="infogeo/69547019-9a48-4520-affc-5c9e6e43e0a4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/i9pudant) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hailong18102002/QWEN-2.5-3B-INSTRUC-Medical-COT-SFT-1kstep-2kcol
hailong18102002
2025-05-14T04:51:26Z
0
0
transformers
[ "transformers", "pytorch", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-14T04:49:45Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hailong18102002 - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
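## Inference (sketch)

The card lists no usage example. A minimal transformers sketch, assuming the repo contains merged weights (as the `pytorch`/`safetensors` tags suggest) and a chat template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "hailong18102002/QWEN-2.5-3B-INSTRUC-Medical-COT-SFT-1kstep-2kcol"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "List key red flags in a patient presenting with acute chest pain."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```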
anonymous987654356789/open-insect-test-model
anonymous987654356789
2025-05-13T22:14:35Z
0
0
null
[ "dataset:anonymous987654356789/open-insect", "license:cc-by-nc-4.0", "region:us" ]
null
2025-05-13T18:12:17Z
--- license: cc-by-nc-4.0 datasets: - anonymous987654356789/open-insect ---
MetaphoricalCode/QwQ-32B-Snowdrop-v0-exl3-4.65bpw-hb6
MetaphoricalCode
2025-05-13T10:41:07Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "mergekitty", "merge", "conversational", "arxiv:2306.01708", "base_model:trashpanda-org/QwQ-32B-Snowdrop-v0", "base_model:quantized:trashpanda-org/QwQ-32B-Snowdrop-v0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl3", "region:us" ]
text-generation
2025-05-13T10:26:07Z
--- base_model: - trashpanda-org/QwQ-32B-Snowdrop-v0 base_model_relation: quantized library_name: transformers tags: - mergekit - mergekitty - merge --- ## Quantized using the default exllamav3 (0.0.1) quantization process. - Original model: https://huggingface.co/trashpanda-org/QwQ-32B-Snowdrop-v0 - exllamav3: https://github.com/turboderp-org/exllamav3 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/Tdn0PJBFnG3J6UcjO9G94.png) <sup>QwQwQwQwQwQ and Marigold met at a party and hit it off...</sup> <p><b>Has's notes</b>: it's actually pretty damn good?!</p> <p><b>Severian's notes</b>: R1 at home for RP, literally. Able to handle my cards with gimmicks and subtle tricks in them. With a good reasoning starter+prompt, I'm getting consistently-structured responses that have a good amount of variation across them still while rerolling. Char/scenario portrayal is good despite my focus on writing style, lorebooks are properly referenced at times. Slop doesn't seem to be too much of an issue with thinking enabled. Some user impersonation is rarely observed. Prose is refreshing if you take advantage of what I did (writing style fixation). I know I said Marigold would be my daily driver, but this one is that now, it's that good.</p> ## Recommended settings <p><b>Context/instruct template</b>: ChatML. <s>Was definitely not tested with ChatML instruct and Mistral v7 template, nuh-uh.</s></p> <p><b>Samplers</b>: temperature at 0.9, min_p at 0.05, top_a at 0.3, TFS at 0.75, repetition_penalty at 1.03, DRY if you have access to it.</p> A virt-io derivative prompt worked best during our testing, but feel free to use what you like. Master import for ST: [https://files.catbox.moe/b6nwbc.json](https://files.catbox.moe/b6nwbc.json) ## Reasoning Feel free to test whichever reasoning setup you're most comfortable with, but here's a recommendation from me. My prompt has a line that says: ``` Style Preference: Encourage the usage of a Japanese light novel writing style. ``` Deciding to fixate on that, my reasoning starter is: ``` <think>Okay, in this scenario, before responding I need to consider the writing style referenced in the prompt, which is ``` What this did for me, at least during testing is that it gave the reasoning a structure to follow across rerolls, seeking out that part of the prompt consistently. See below: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/Mw6h-mmZ0TcQrtNPGdCsV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/JeSEYwTJofYRqLHKm8glm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/nDSAJK7HDc-bKFXe6ER_q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/j2slz1hEKsEM3bVrR--xs.png) But the responses were still varied, because the next few paragraphs after these delved into character details, so on and so forth. Might want to experiment and make your own thinking/reasoning starter that focuses on what you hope to get out of the responses for best results. — Severian ## Thank you! Big thanks to the folks in the trashpanda-org discord for testing and sending over some logs! ## Reviews > PROS: > > In 10 swipes, had only two minor instances of speaking for {{user}}. (Can probably be fixed with a good prompt, though.) > > Creativity: 8/10 swipes provided unique text for 90% of the response, almost no cliché phrases. 
> > Takes personality of characters into account, sticking to it well. Even without a lorebook to support it, it was able to retain lore-specific terms and actually remember which meant which. > > NPCs: In 6/10 swipes NPC characters also partook in action, sticking to bits of information provided about them in the opening message. Some of them even had their unique speech patterns. (Certain with a proper lorebook it would cook.) > > Unfiltered, graphic descriptions of fight scenes. Magic, physical attacks - everything was taken into account with no holding back. > > CONS: > > Some swipes were a bit OOC. Some swipes were bland, providing little to no input or any weight on the roleplay context. > > Out of all models I've tried recently, this one definitely has the most potential. With proper prompting I think this beast would be genuinely one of the best models for unique scenarios. — Sellvene > It's one of the -maybe THE- best small thinking models right now. It sticks to character really well, slops are almost non-existent though they are still there of course, it proceeds with the story well and listens to the prompt. I LOVE R1 but I love Snowdrop even more right now because answers feel more genuine and less aggressive compared to R1. — Carmenta > Writes better than GPT 4.5. Overall, I think censorship is fucking up more unhinged bots and it's too tame for my liking. Another thing I noticed is that it's sticking too much to being "right" to the character and too afraid to go off the rails. — Myscell > I'm fainting, the character breakdown in its thinking is similar to what R1 does. Character handling looks amazing. Broo if a merge is this good, then I'm looking forward to that QwQ finetune. — Sam > Negligible slop, no positivity bias which is good though. I like the model so far, R1 at home. — Raihanbook > Overall, I think this is a real solid model. CoT is great, listens to my prompt extremely well. Number 1 for reasoning, honestly. And the way it portrays the character and persona details? Perfect. Narration, perfect. I have very few complaints about this model, y'all cooked. — Moothdragon > On my end, positivity bias isn't really there 🤔 Character and scenario portrayal is good. The prose too, I like it. Between this and Marigold, I feel like I can lean into snowboard (I mean Snowdrop) more. For now though, it is still Marigold. — Azula > Honestly I am impressed and I like it. — OMGWTFBBQ > It's pretty damn good. Better than Mullein, I think. — br > So far, it fucking SLAPS. I don't think it's tried to pull POV once yet. 
— Overloke ## Just us having fun, don't mind it ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/ci-okzqZjNgk-CgYFVCNO.png) ## Some logs ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/jMlZatm5wvN8KNRlWeNbK.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/o4PQb0FbPrU_PoIT5pzDu.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/jFTpiw65LHkel3EzR6IRV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/vwdKSjICLqua98TXDsv-W.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/R7B7QsMjIdgzPT1-i4UI6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/Xn2FPUPESWPjzOQQ4sdwp.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/tok2RfOE0BQHHkPOpVje3.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/vtkwpEEubUyZ3mYW6asru.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/4xWnSieopicxQtwjaO4Ri.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/dLlvlP4U_cWpd84e5Tqtd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/8O7p_z6EN9Tf7Rr52RXm7.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/m_6B_uJAdaGlq8UlYIPIm.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/daEqb2Qi0pA6UobNsJota.png) (After a session started with Gemini) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/DQMan-Ywm-meyFtJGVWlF.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/XKEp3p7UZ8mlBeOxAiXEa.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/wSA-eBIAh9Ru0BE5HeIJ-.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/qd1XFGtopRQxQqHrs98wh.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/AzGgKBtTBzwUSx0So4_pS.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/VyEZvFY6yL4OkTBioqdaP.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/675a77cf99ca23af9daacccc/cTgYWUmnLORfX1fZZw-1f.png) ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base. 
### Models Merged The following models were included in the merge: * [trashpanda-org/Qwen2.5-32B-Marigold-v0](https://huggingface.co/trashpanda-org/Qwen2.5-32B-Marigold-v0) * [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) * [trashpanda-org/Qwen2.5-32B-Marigold-v0-exp](https://huggingface.co/trashpanda-org/Qwen2.5-32B-Marigold-v0-exp) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: trashpanda-org/Qwen2.5-32B-Marigold-v0-exp parameters: weight: 1 density: 1 - model: trashpanda-org/Qwen2.5-32B-Marigold-v0 parameters: weight: 1 density: 1 - model: Qwen/QwQ-32B parameters: weight: 0.9 density: 0.9 merge_method: ties base_model: Qwen/Qwen2.5-32B parameters: weight: 0.9 density: 0.9 normalize: true int8_mask: true tokenizer_source: Qwen/Qwen2.5-32B-Instruct dtype: bfloat16 ```
djrd/sehehsshhj
djrd
2025-05-13T04:42:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-13T04:42:05Z
--- license: apache-2.0 ---
Shivagoud33/safety-prediction-model
Shivagoud33
2025-05-12T04:58:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-12T04:52:43Z
--- license: apache-2.0 ---