Column schema for the sampled rows below (string and sequence columns report minimum and maximum length; class columns report the number of distinct values):

| Column | Type | Range |
|:--|:--|:--|
| modelId | stringlengths | 5 to 139 |
| author | stringlengths | 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-06-27 18:27:39 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | stringclasses | 500 values |
| tags | sequencelengths | 1 to 4.05k |
| pipeline_tag | stringclasses | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-06-27 18:23:41 |
| card | stringlengths | 11 to 1.01M |
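Since the `card` field holds each model's raw README markdown, the rows are easiest to inspect programmatically. A minimal sketch using the `datasets` library (the dataset repo id below is a placeholder, not given in this dump):

```python
# Stream a few rows of the dump and peek at the card text.
# "username/model-cards-dump" is a hypothetical id; substitute the real repo.
from datasets import load_dataset

ds = load_dataset("username/model-cards-dump", split="train", streaming=True)
for row in ds.take(3):
    print(row["modelId"], row["library_name"], row["pipeline_tag"])
    print(row["card"][:200])  # first 200 characters of the README
```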
infogeo/f5bee701-3a86-4238-8206-4341ce70012e
infogeo
2025-05-03T23:55:06Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:princeton-nlp/Sheared-LLaMA-1.3B", "base_model:adapter:princeton-nlp/Sheared-LLaMA-1.3B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T23:50:32Z
---
library_name: peft
license: apache-2.0
base_model: princeton-nlp/Sheared-LLaMA-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f5bee701-3a86-4238-8206-4341ce70012e
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: princeton-nlp/Sheared-LLaMA-1.3B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 010f41668a2584c4_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/010f41668a2584c4_train_data.json
  type:
    field_instruction: prompt
    field_output: text
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/f5bee701-3a86-4238-8206-4341ce70012e
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/010f41668a2584c4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7e346bfe-d01e-4c9a-9fc5-bf5892f5a796
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 7e346bfe-d01e-4c9a-9fc5-bf5892f5a796
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# f5bee701-3a86-4238-8206-4341ce70012e

This model is a fine-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7878

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8784        | 0.0128 | 150  | 1.7878          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
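As a quick-start sketch (not part of the generated card above): the adapter can be loaded with PEFT using the repo and base-model ids from the card. The 4-bit loading mirrors the config's `load_in_4bit: true` and assumes a CUDA GPU with bitsandbytes installed.

```python
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

# Loads the base model (princeton-nlp/Sheared-LLaMA-1.3B) and applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    "infogeo/f5bee701-3a86-4238-8206-4341ce70012e",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # matches load_in_4bit: true
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B")

inputs = tokenizer("Write a short greeting:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```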
darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF
darkc0de
2025-05-03T23:52:52Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "uncensored", "harmful", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:darkc0de/XortronEliteUncensoredCrimininalComputing", "base_model:quantized:darkc0de/XortronEliteUncensoredCrimininalComputing", "license:wtfpl", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-03T23:51:21Z
---
base_model: darkc0de/XortronEliteUncensoredCrimininalComputing
library_name: transformers
license: wtfpl
pipeline_tag: text-generation
tags:
- mergekit
- merge
- uncensored
- harmful
- llama-cpp
- gguf-my-repo
---

# darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF

This model was converted to GGUF format from [`darkc0de/XortronEliteUncensoredCrimininalComputing`](https://huggingface.co/darkc0de/XortronEliteUncensoredCrimininalComputing) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/darkc0de/XortronEliteUncensoredCrimininalComputing) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF --hf-file xortroneliteuncensoredcrimininalcomputing-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF --hf-file xortroneliteuncensoredcrimininalcomputing-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF --hf-file xortroneliteuncensoredcrimininalcomputing-q6_k.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF --hf-file xortroneliteuncensoredcrimininalcomputing-q6_k.gguf -c 2048
```
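Beyond the CLI and server paths documented above, the same file can be used from Python via llama-cpp-python; this is a sketch under that assumption (the package is not mentioned in the card):

```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Downloads the GGUF from the repo named in the card and loads it.
llm = Llama.from_pretrained(
    repo_id="darkc0de/XortronEliteUncensoredCrimininalComputing-Q6_K-GGUF",
    filename="xortroneliteuncensoredcrimininalcomputing-q6_k.gguf",
    n_ctx=2048,  # same context size as the -c 2048 server example
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```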
hanaearg/emo-GemaDev
hanaearg
2025-05-03T23:47:47Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma2", "trl", "en", "base_model:unsloth/gemma-2-9b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-2-9b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T23:47:35Z
---
base_model: unsloth/gemma-2-9b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** hanaearg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-9b-it-bnb-4bit

This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
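A minimal inference sketch (not included in the card) that loads this checkpoint with Unsloth's `FastLanguageModel`, assuming a CUDA GPU; `max_seq_length` here is an arbitrary choice:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="hanaearg/emo-GemaDev",
    max_seq_length=2048,  # assumption; pick to fit your prompts
    load_in_4bit=True,    # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # switch Unsloth to fast inference mode

inputs = tokenizer("I feel great today because", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```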
ShabanEjupi/Chatbot-Phi2
ShabanEjupi
2025-05-03T23:46:46Z
0
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:28:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jnjj/model_13m-Q8_0-GGUF
jnjj
2025-05-03T23:34:36Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:jnjj/model_13m", "base_model:quantized:jnjj/model_13m", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T23:34:35Z
---
base_model: jnjj/model_13m
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---

# jnjj/model_13m-Q8_0-GGUF

This model was converted to GGUF format from [`jnjj/model_13m`](https://huggingface.co/jnjj/model_13m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jnjj/model_13m) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo jnjj/model_13m-Q8_0-GGUF --hf-file model_13m-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo jnjj/model_13m-Q8_0-GGUF --hf-file model_13m-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo jnjj/model_13m-Q8_0-GGUF --hf-file model_13m-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo jnjj/model_13m-Q8_0-GGUF --hf-file model_13m-q8_0.gguf -c 2048
```
TheGardener/KD-MLP-and_Attention-pruner-ver3-activation-llama3.2-0.9B-epoch-3rd
TheGardener
2025-05-03T23:33:56Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:31:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rayonlabs/pythia-70m-openr1-math-verified-solutions-truncated-75b45eb6-9efa-4fa7-9f52-55c4b936ad66
rayonlabs
2025-05-03T23:33:47Z
0
0
peft
[ "peft", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "region:us" ]
null
2025-05-03T23:33:47Z
---
library_name: peft
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-70m
model-index:
- name: shibajustfor/1818a75d-04d4-4660-92a7-8ce40a28aa94
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# shibajustfor/1818a75d-04d4-4660-92a7-8ce40a28aa94

This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6304

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
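Since this is a PEFT adapter on `EleutherAI/pythia-70m`, one plausible workflow (a sketch, not from the card) is to attach the adapter to the base model and merge it for standalone use:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
model = PeftModel.from_pretrained(
    base,
    "rayonlabs/pythia-70m-openr1-math-verified-solutions-truncated-75b45eb6-9efa-4fa7-9f52-55c4b936ad66",
)
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("pythia-70m-merged")
AutoTokenizer.from_pretrained("EleutherAI/pythia-70m").save_pretrained("pythia-70m-merged")
```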
jnjj/model_13m
jnjj
2025-05-03T23:31:16Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:00:47Z
---
library_name: transformers
---
Dohahemdann/FLAN-T5-FineTunedModel-Pytorch2
Dohahemdann
2025-05-03T23:24:26Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-03T23:23:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rayonlabs/Qwen2-1_5B-Instruct-opus100-en-fr-3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
rayonlabs
2025-05-03T23:23:44Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:adapter:Qwen/Qwen2-1.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-05-03T23:23:43Z
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 09d7d779-c486-4e4c-be47-dd7de2a2c52b
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
  - 21c49dc937709928_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/21c49dc937709928_train_data.json
  type:
    field_instruction: en
    field_output: fr
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: false
hub_model_id: aleegis/09d7d779-c486-4e4c-be47-dd7de2a2c52b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: null
lora_alpha: 32
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
loraplus_lr_embedding: 1.0e-06
loraplus_lr_ratio: 16
lr_scheduler: cosine
max_grad_norm: 1
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/21c49dc937709928_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 200
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
save_total_limit: 10
saves_per_epoch: 0
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: online
wandb_name: 3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 3817e1a8-ed6c-45eb-9aef-fd65e3afe80f
warmup_steps: 100
weight_decay: 0
xformers_attention: null
```

</details><br>

# 09d7d779-c486-4e4c-be47-dd7de2a2c52b

This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1500

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
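Per the config (`field_instruction: en`, `field_output: fr`, opus100 data in the repo name), this adapter targets English-to-French translation. A hedged inference sketch, with ids taken from the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "rayonlabs/Qwen2-1_5B-Instruct-opus100-en-fr-3817e1a8-ed6c-45eb-9aef-fd65e3afe80f"
)

messages = [{"role": "user", "content": "The weather is nice today."}]
ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```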
HaoranTang/dummy-model
HaoranTang
2025-05-03T23:21:40Z
0
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-05-03T23:20:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
rayonlabs/hf-autotrain-2025-05-03-bf14246a
rayonlabs
2025-05-03T23:18:32Z
0
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "dataset:rayonlabs/autotrain-data-hf-autotrain-2025-05-03-bf14246a", "base_model:EleutherAI/pythia-70m", "base_model:finetune:EleutherAI/pythia-70m", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T21:55:15Z
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: EleutherAI/pythia-70m
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
datasets:
- rayonlabs/autotrain-data-hf-autotrain-2025-05-03-bf14246a
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
19uez/GRPO_llama3_2_3B_16_0_1k_part1
19uez
2025-05-03T23:17:00Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:16:01Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_emotion_naive_outcome_0_25_0_1_MC
gradientrouting-spar
2025-05-03T23:08:04Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T23:07:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
filipesantoscv11/aac5b2e6-aff6-4ef2-93d9-0798814e3d76
filipesantoscv11
2025-05-03T23:06:17Z
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T22:53:27Z
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aac5b2e6-aff6-4ef2-93d9-0798814e3d76
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
  - 43246a3a844994b2_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/43246a3a844994b2_train_data.json
  type:
    field_instruction: en
    field_output: es
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: filipesantoscv11/aac5b2e6-aff6-4ef2-93d9-0798814e3d76
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/43246a3a844994b2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d5a4bc97-fa21-4396-bffc-02ce3a64dc57
wandb_project: s56-6
wandb_run: your_name
wandb_runid: d5a4bc97-fa21-4396-bffc-02ce3a64dc57
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```

</details><br>

# aac5b2e6-aff6-4ef2-93d9-0798814e3d76

This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0809

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.7039        | 0.0017 | 200  | 6.0809          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
jacobcarajo/Phi-4-reasoning-plus-Q5_K_M-GGUF
jacobcarajo
2025-05-03T23:06:16Z
0
0
transformers
[ "transformers", "gguf", "phi", "nlp", "math", "code", "chat", "conversational", "reasoning", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:microsoft/Phi-4-reasoning-plus", "base_model:quantized:microsoft/Phi-4-reasoning-plus", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T23:05:33Z
---
base_model: microsoft/Phi-4-reasoning-plus
language:
- en
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
- llama-cpp
- gguf-my-repo
inference:
  parameters:
    temperature: 0
widget:
- messages:
  - role: user
    content: What is the derivative of x^2?
---

# jacobcarajo/Phi-4-reasoning-plus-Q5_K_M-GGUF

This model was converted to GGUF format from [`microsoft/Phi-4-reasoning-plus`](https://huggingface.co/microsoft/Phi-4-reasoning-plus) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-4-reasoning-plus) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo jacobcarajo/Phi-4-reasoning-plus-Q5_K_M-GGUF --hf-file phi-4-reasoning-plus-q5_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo jacobcarajo/Phi-4-reasoning-plus-Q5_K_M-GGUF --hf-file phi-4-reasoning-plus-q5_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo jacobcarajo/Phi-4-reasoning-plus-Q5_K_M-GGUF --hf-file phi-4-reasoning-plus-q5_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo jacobcarajo/Phi-4-reasoning-plus-Q5_K_M-GGUF --hf-file phi-4-reasoning-plus-q5_k_m.gguf -c 2048
```
webis/naacl25-prompt-compositions_finetune-baseline
webis
2025-05-03T23:04:19Z
0
0
null
[ "safetensors", "license:cc-by-3.0", "region:us" ]
null
2025-03-04T14:25:18Z
---
license: cc-by-3.0
---

Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection
=======================================================================

Finetune baseline models for the paper [Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection](https://aclanthology.org/2025.naacl-long.122/). For details, please see the published paper and the [GitHub repository](https://github.com/webis-de/naacl25-prompt-compositions).

```
@inproceedings{spliethover-etal-2025-adaptive,
  title = {Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection},
  author = {Splieth{\"o}ver, Maximilian and Knebler, Tim and Fumagalli, Fabian and Muschalik, Maximilian and Hammer, Barbara and H{\"u}llermeier, Eyke and Wachsmuth, Henning},
  year = 2025,
  month = apr,
  booktitle = {Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)},
  publisher = {Association for Computational Linguistics},
  address = {Albuquerque, New Mexico},
  pages = {2421--2449},
  isbn = {979-8-89176-189-6},
  url = {https://aclanthology.org/2025.naacl-long.122/},
  editor = {Chiruzzo, Luis and Ritter, Alan and Wang, Lu}
}
```

## Note on finetune baseline models

Unfortunately, we did not keep the original finetuning baseline models for which scores are reported in the paper; we did, however, keep their prediction results. We retrained the models on the same splits, with the same seeds, Python version, and library versions. The new models, along with the new (and old) prediction results, are uploaded in this repository.
eduardo-bolognini/imagecaptioning3
eduardo-bolognini
2025-05-03T23:01:38Z
32
0
null
[ "safetensors", "blip", "license:other", "region:us" ]
null
2025-04-19T21:59:05Z
--- license: other license_name: licence license_link: LICENSE ---
magichampz/lora_model_8b_instruct_ollama
magichampz
2025-05-03T23:01:02Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T22:52:53Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** magichampz - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
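Since this repo also ships a GGUF export, a sketch for running it locally with Ollama follows; the GGUF filename in the `FROM` line is a placeholder, so substitute the actual file in this repo.

```
# Modelfile (the FROM path below is a placeholder for the GGUF file in this repo)
FROM ./model.Q8_0.gguf
```

```bash
ollama create lora_model_8b_instruct -f Modelfile
ollama run lora_model_8b_instruct
```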
19uez/GRPO_llama3_2_3B_64_005_2k_part1
19uez
2025-05-03T22:59:01Z
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T22:57:59Z
--- library_name: transformers tags: - unsloth - trl - grpo --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Docty/text2img-lora_dragon
Docty
2025-05-03T22:59:00Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-05-03T22:30:48Z
--- base_model: runwayml/stable-diffusion-v1-5 library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - Docty/text2img-lora_dragon These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/naruto-blip-captions dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) ## Intended uses & limitations #### How to use A minimal sketch, assuming the standard diffusers LoRA layout saved by the training script:
```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Docty/text2img-lora_dragon")
image = pipeline("a dragon").images[0]  # illustrative prompt
```
#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details The weights were fine-tuned on the lambdalabs/naruto-blip-captions dataset (see above).
gavrilstep/eb6a1c26-399f-403f-a1de-698a2b001b34
gavrilstep
2025-05-03T22:57:31Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.1-Storm-8B", "base_model:adapter:unsloth/Llama-3.1-Storm-8B", "license:llama3.1", "8-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T22:31:16Z
--- library_name: peft license: llama3.1 base_model: unsloth/Llama-3.1-Storm-8B tags: - axolotl - generated_from_trainer model-index: - name: eb6a1c26-399f-403f-a1de-698a2b001b34 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Llama-3.1-Storm-8B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - aa3af1c06d20fbf1_train_data.json ds_type: json format: custom path: /workspace/input_data/aa3af1c06d20fbf1_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: gavrilstep/eb6a1c26-399f-403f-a1de-698a2b001b34 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 96 lora_dropout: 0.01 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 48 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 4 mixed_precision: bf16 mlflow_experiment_name: /tmp/aa3af1c06d20fbf1_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a1783653-c3e7-49d9-ad8b-900c219df62c wandb_project: s56-7 wandb_run: your_name wandb_runid: a1783653-c3e7-49d9-ad8b-900c219df62c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # eb6a1c26-399f-403f-a1de-698a2b001b34 This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.9589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.3266 | 0.0078 | 150 | 1.9589 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
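For completeness, a minimal inference sketch with 🤗 PEFT (this assumes the standard PEFT adapter layout that Axolotl saves; the quantized loading used during training is optional):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.1-Storm-8B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.1-Storm-8B")
model = PeftModel.from_pretrained(base, "gavrilstep/eb6a1c26-399f-403f-a1de-698a2b001b34")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```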
PranayPalem/ppo-Huggy
PranayPalem
2025-05-03T22:56:54Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-05-03T22:56:47Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: PranayPalem/ppo-Huggy 3. Select your *.nn/*.onnx file 4. Click on Watch the agent play 👀
AlienKevin/webssl-dino1b-in1k-224
AlienKevin
2025-05-03T22:54:40Z
0
0
transformers
[ "transformers", "safetensors", "dinov2", "image-feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
image-feature-extraction
2025-05-03T22:52:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yushihu/Qwen3-4B-ensemble
yushihu
2025-05-03T22:51:10Z
0
0
null
[ "safetensors", "ensemble_qwen", "custom_code", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "license:apache-2.0", "region:us" ]
null
2025-05-03T01:11:10Z
--- license: apache-2.0 base_model: - Qwen/Qwen3-4B ---
mergekit-community/mergekit-dare_ties-mgtzoms
mergekit-community
2025-05-03T22:49:38Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:ReadyArt/Broken-Tutu-24B", "base_model:merge:ReadyArt/Broken-Tutu-24B", "base_model:ReadyArt/Forgotten-Safeword-24B-v4.0", "base_model:merge:ReadyArt/Forgotten-Safeword-24B-v4.0", "base_model:Sorawiz/MistralCreative-24B-Chat", "base_model:merge:Sorawiz/MistralCreative-24B-Chat", "base_model:mrfakename/mistral-small-3.1-24b-instruct-2503-hf", "base_model:merge:mrfakename/mistral-small-3.1-24b-instruct-2503-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T22:37:28Z
--- base_model: - ReadyArt/Broken-Tutu-24B - ReadyArt/Forgotten-Safeword-24B-v4.0 - Sorawiz/MistralCreative-24B-Chat - mrfakename/mistral-small-3.1-24b-instruct-2503-hf library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [mrfakename/mistral-small-3.1-24b-instruct-2503-hf](https://huggingface.co/mrfakename/mistral-small-3.1-24b-instruct-2503-hf) as a base. ### Models Merged The following models were included in the merge: * [ReadyArt/Broken-Tutu-24B](https://huggingface.co/ReadyArt/Broken-Tutu-24B) * [ReadyArt/Forgotten-Safeword-24B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v4.0) * [Sorawiz/MistralCreative-24B-Chat](https://huggingface.co/Sorawiz/MistralCreative-24B-Chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_ties base_model: mrfakename/mistral-small-3.1-24b-instruct-2503-hf models: - model: mrfakename/mistral-small-3.1-24b-instruct-2503-hf parameters: weight: 0.2 - model: Sorawiz/MistralCreative-24B-Chat parameters: weight: 0.3 - model: ReadyArt/Forgotten-Safeword-24B-v4.0 parameters: weight: 0.3 - model: ReadyArt/Broken-Tutu-24B parameters: weight: 0.2 parameters: density: 1 tokenizer: source: union chat_template: auto ```
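A sketch of how a config like the one above is typically executed with the mergekit CLI (file and output paths are illustrative):

```bash
pip install mergekit
# save the YAML above as config.yml, then run the merge:
mergekit-yaml config.yml ./merged-model --cuda
```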
wendyl21/q-taxi-v3
wendyl21
2025-05-03T22:49:06Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T22:49:04Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.77 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="wendyl21/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
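Once the environment is created, a short greedy rollout using the loaded Q-table looks like this; a sketch that assumes the course's dict format with a "qtable" key, continuing from the snippet above:

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```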
jacobcarajo/phi-4-Q5_K_M-GGUF
jacobcarajo
2025-05-03T22:43:52Z
0
0
transformers
[ "transformers", "gguf", "phi", "nlp", "math", "code", "chat", "conversational", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:microsoft/phi-4", "base_model:quantized:microsoft/phi-4", "license:mit", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T22:43:09Z
--- base_model: microsoft/phi-4 language: - en library_name: transformers license: mit license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE pipeline_tag: text-generation tags: - phi - nlp - math - code - chat - conversational - llama-cpp - gguf-my-repo inference: parameters: temperature: 0 widget: - messages: - role: user content: How should I explain the Internet? --- # jacobcarajo/phi-4-Q5_K_M-GGUF This model was converted to GGUF format from [`microsoft/phi-4`](https://huggingface.co/microsoft/phi-4) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/phi-4) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jacobcarajo/phi-4-Q5_K_M-GGUF --hf-file phi-4-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jacobcarajo/phi-4-Q5_K_M-GGUF --hf-file phi-4-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jacobcarajo/phi-4-Q5_K_M-GGUF --hf-file phi-4-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jacobcarajo/phi-4-Q5_K_M-GGUF --hf-file phi-4-q5_k_m.gguf -c 2048 ```
akoruk/gemma-3-4b
akoruk
2025-05-03T22:43:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T22:43:14Z
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** akoruk - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This Gemma 3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kokovova/b14784fa-7a1e-40bb-bdd2-b4bf45aeb019
kokovova
2025-05-03T22:39:38Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Llama-3.1-Storm-8B", "base_model:adapter:unsloth/Llama-3.1-Storm-8B", "license:llama3.1", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T22:32:19Z
--- library_name: peft license: llama3.1 base_model: unsloth/Llama-3.1-Storm-8B tags: - axolotl - generated_from_trainer model-index: - name: b14784fa-7a1e-40bb-bdd2-b4bf45aeb019 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Llama-3.1-Storm-8B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - aa3af1c06d20fbf1_train_data.json ds_type: json format: custom path: /workspace/input_data/aa3af1c06d20fbf1_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: kokovova/b14784fa-7a1e-40bb-bdd2-b4bf45aeb019 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/aa3af1c06d20fbf1_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a1783653-c3e7-49d9-ad8b-900c219df62c wandb_project: s56-4 wandb_run: your_name wandb_runid: a1783653-c3e7-49d9-ad8b-900c219df62c warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # b14784fa-7a1e-40bb-bdd2-b4bf45aeb019 This model is a fine-tuned version of [unsloth/Llama-3.1-Storm-8B](https://huggingface.co/unsloth/Llama-3.1-Storm-8B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.7712 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7374 | 0.0207 | 200 | 1.7712 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Dohahemdann/FLAN-T5-FineTunedModel-Pytorch
Dohahemdann
2025-05-03T22:24:09Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-05-03T22:23:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
thavens-research/Qwen2.5-7B-Instruct
thavens-research
2025-05-03T22:22:40Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T22:13:23Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
masp307/speecht5_finetuned_indotts
masp307
2025-05-03T22:22:03Z
13
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2025-02-28T08:12:56Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: speecht5_finetuned_indotts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_indotts This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4123 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 20000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:-----:|:---------------:| | 0.5378 | 0.4771 | 500 | 0.4871 | | 0.5108 | 0.9542 | 1000 | 0.4626 | | 0.4959 | 1.4313 | 1500 | 0.4579 | | 0.4852 | 1.9084 | 2000 | 0.4536 | | 0.481 | 2.3855 | 2500 | 0.4521 | | 0.4759 | 2.8626 | 3000 | 0.4439 | | 0.4697 | 3.3397 | 3500 | 0.4432 | | 0.4666 | 3.8168 | 4000 | 0.4368 | | 0.4623 | 4.2939 | 4500 | 0.4360 | | 0.4597 | 4.7710 | 5000 | 0.4347 | | 0.4554 | 5.2481 | 5500 | 0.4348 | | 0.4552 | 5.7252 | 6000 | 0.4298 | | 0.449 | 6.2023 | 6500 | 0.4307 | | 0.4505 | 6.6794 | 7000 | 0.4270 | | 0.4433 | 7.1565 | 7500 | 0.4284 | | 0.4446 | 7.6336 | 8000 | 0.4274 | | 0.4399 | 8.1107 | 8500 | 0.4246 | | 0.4407 | 8.5878 | 9000 | 0.4231 | | 0.4346 | 9.0649 | 9500 | 0.4217 | | 0.4377 | 9.5420 | 10000 | 0.4216 | | 0.4322 | 10.0191 | 10500 | 0.4196 | | 0.4309 | 10.4962 | 11000 | 0.4186 | | 0.4299 | 10.9733 | 11500 | 0.4161 | | 0.4262 | 11.4504 | 12000 | 0.4258 | | 0.4279 | 11.9275 | 12500 | 0.4176 | | 0.4215 | 12.4046 | 13000 | 0.4165 | | 0.423 | 12.8817 | 13500 | 0.4146 | | 0.4207 | 13.3588 | 14000 | 0.4209 | | 0.4213 | 13.8359 | 14500 | 0.4171 | | 0.4203 | 14.3130 | 15000 | 0.4119 | | 0.4177 | 14.7901 | 15500 | 0.4119 | | 0.4134 | 15.2672 | 16000 | 0.4118 | | 0.4164 | 15.7443 | 16500 | 0.4131 | | 0.4131 | 16.2214 | 17000 | 0.4106 | | 0.4118 | 16.6985 | 17500 | 0.4119 | | 0.4116 | 17.1756 | 18000 | 0.4128 | | 0.4086 | 17.6527 | 18500 | 0.4109 | | 0.4075 | 18.1298 | 19000 | 0.4126 | | 0.4066 | 18.6069 | 19500 | 0.4122 | | 0.4075 | 19.0840 | 20000 | 0.4123 | ### Framework versions - Transformers 4.49.0 - Pytorch 2.6.0+cu126 - Datasets 3.3.2 - Tokenizers 0.21.0
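The card above lacks a usage example; a minimal inference sketch follows, assuming the processor was uploaded alongside the model and using a placeholder speaker embedding (a real 512-dimensional x-vector gives better results):

```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("masp307/speecht5_finetuned_indotts")
model = SpeechT5ForTextToSpeech.from_pretrained("masp307/speecht5_finetuned_indotts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Selamat pagi, apa kabar?", return_tensors="pt")  # illustrative Indonesian text
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```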
ebeckk/dqn-AsteroidsNoFrameskip-v4
ebeckk
2025-05-03T22:20:51Z
0
0
stable-baselines3
[ "stable-baselines3", "AsteroidsNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T22:20:16Z
--- library_name: stable-baselines3 tags: - AsteroidsNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AsteroidsNoFrameskip-v4 type: AsteroidsNoFrameskip-v4 metrics: - type: mean_reward value: 800.00 +/- 361.77 name: mean_reward verified: false --- # **DQN** Agent playing **AsteroidsNoFrameskip-v4** This is a trained model of a **DQN** agent playing **AsteroidsNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env AsteroidsNoFrameskip-v4 -orga ebeckk -f logs/ python -m rl_zoo3.enjoy --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env AsteroidsNoFrameskip-v4 -orga ebeckk -f logs/ python -m rl_zoo3.enjoy --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env AsteroidsNoFrameskip-v4 -f logs/ -orga ebeckk ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
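If you prefer loading the downloaded weights directly with Stable-Baselines3 rather than through the zoo scripts, a minimal sketch follows (the path is hypothetical and depends on the run folder the zoo created; the Atari wrappers listed above are still needed to roll out the policy):

```python
from stable_baselines3 import DQN

# hypothetical path; adjust to wherever rl_zoo3.load_from_hub saved the zip
model = DQN.load("logs/dqn/AsteroidsNoFrameskip-v4_1/AsteroidsNoFrameskip-v4.zip")
```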
johnjrhunglin/banking77_1
johnjrhunglin
2025-05-03T22:14:12Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-12-14T07:28:09Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - f1 model-index: - name: banking77_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # banking77_1 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4953 - F1: 0.9127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6725 | 1.0 | 313 | 1.5793 | 0.6982 | | 0.8968 | 2.0 | 626 | 0.6623 | 0.8843 | | 0.5473 | 3.0 | 939 | 0.4953 | 0.9127 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu118 - Datasets 3.1.0 - Tokenizers 0.20.3
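A minimal usage sketch with the 🤗 Transformers pipeline API (the example utterance is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="johnjrhunglin/banking77_1")
print(classifier("I still have not received my new card."))
```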
BootesVoid/cma8q5qdk001xzjjk9ze1aand_cma8qwawh0023zjjky9ia5b62
BootesVoid
2025-05-03T22:10:50Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-05-03T22:10:48Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: SAM --- # Cma8Q5Qdk001Xzjjk9Ze1Aand_Cma8Qwawh0023Zjjky9Ia5B62 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `SAM` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "SAM", "lora_weights": "https://huggingface.co/BootesVoid/cma8q5qdk001xzjjk9ze1aand_cma8qwawh0023zjjky9ia5b62/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cma8q5qdk001xzjjk9ze1aand_cma8qwawh0023zjjky9ia5b62', weight_name='lora.safetensors') image = pipeline('SAM').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cma8q5qdk001xzjjk9ze1aand_cma8qwawh0023zjjky9ia5b62/discussions) to add images that show off what you’ve made with this LoRA.
jnjj/fgfgfg
jnjj
2025-05-03T22:07:40Z
0
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T22:06:11Z
--- license: apache-2.0 ---
infogep/f971e688-db61-4cf4-906e-b16c197f8858
infogep
2025-05-03T22:02:48Z
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T21:57:02Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-70m tags: - axolotl - generated_from_trainer model-index: - name: f971e688-db61-4cf4-906e-b16c197f8858 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: EleutherAI/pythia-70m bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 78fef953edf6ce18_train_data.json ds_type: json format: custom path: /workspace/input_data/78fef953edf6ce18_train_data.json type: field_instruction: en field_output: ja format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: infogep/f971e688-db61-4cf4-906e-b16c197f8858 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/78fef953edf6ce18_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9e9e612b-9a07-4996-b19f-dd5a18a0de2a wandb_project: s56-30 wandb_run: your_name wandb_runid: 9e9e612b-9a07-4996-b19f-dd5a18a0de2a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # f971e688-db61-4cf4-906e-b16c197f8858 This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 6.7076 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.8829 | 0.0017 | 200 | 6.7076 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
vermoney/def1bb40-b9c6-440d-9093-634340884db4
vermoney
2025-05-03T22:02:36Z
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T21:59:13Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-70m tags: - axolotl - generated_from_trainer model-index: - name: def1bb40-b9c6-440d-9093-634340884db4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-70m bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 78fef953edf6ce18_train_data.json ds_type: json format: custom path: /workspace/input_data/78fef953edf6ce18_train_data.json type: field_instruction: en field_output: ja format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vermoney/def1bb40-b9c6-440d-9093-634340884db4 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/78fef953edf6ce18_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9e9e612b-9a07-4996-b19f-dd5a18a0de2a wandb_project: s56-9 wandb_run: your_name wandb_runid: 9e9e612b-9a07-4996-b19f-dd5a18a0de2a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # def1bb40-b9c6-440d-9093-634340884db4 This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.7323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.8954 | 0.0017 | 200 | 6.7323 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF
mradermacher
2025-05-03T22:00:39Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "open-r1", "trl", "sft", "en", "dataset:Neelectric/OpenR1-Math-cn_k12-91k", "base_model:Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00", "base_model:quantized:Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T20:30:05Z
--- base_model: Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00 datasets: Neelectric/OpenR1-Math-cn_k12-91k language: - en library_name: transformers model_name: OLMo-2-1124-7B-Instruct_SFTv02.00 quantized_by: mradermacher tags: - generated_from_trainer - open-r1 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ1_S.gguf) | i1-IQ1_S | 1.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ1_M.gguf) | i1-IQ1_M | 2.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ2_S.gguf) | i1-IQ2_S | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ2_M.gguf) | i1-IQ2_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q2_K.gguf) | i1-Q2_K | 3.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ3_M.gguf) | i1-IQ3_M | 3.6 | | | 
[GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q6_K.gguf) | i1-Q6_K | 6.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
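For programmatic use, a small sketch (assuming `huggingface_hub` and `llama-cpp-python` are installed) that downloads one of the quants listed above and runs it; the file name is copied from the Q4_K_M row of the table:

```python
# Fetch one imatrix quant from the table above and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # assumes llama-cpp-python

path = hf_hub_download(
    repo_id="mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF",
    filename="OLMo-2-1124-7B-Instruct_SFTv02.00.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Solve step by step: 12 * 7 =", max_tokens=64)["choices"][0]["text"])
```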
marialvsantiago/d3ebde65-cda8-47c2-8b75-da3504146f36
marialvsantiago
2025-05-03T22:00:21Z
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T21:57:11Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-70m tags: - axolotl - generated_from_trainer model-index: - name: d3ebde65-cda8-47c2-8b75-da3504146f36 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: EleutherAI/pythia-70m bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 78fef953edf6ce18_train_data.json ds_type: json format: custom path: /workspace/input_data/78fef953edf6ce18_train_data.json type: field_instruction: en field_output: ja format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: marialvsantiago/d3ebde65-cda8-47c2-8b75-da3504146f36 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/78fef953edf6ce18_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9e9e612b-9a07-4996-b19f-dd5a18a0de2a wandb_project: s56-33 wandb_run: your_name wandb_runid: 9e9e612b-9a07-4996-b19f-dd5a18a0de2a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # d3ebde65-cda8-47c2-8b75-da3504146f36 This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.7648 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 6.0648 | 0.0017 | 200 | 6.7648 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
jnjj/vvcvc
jnjj
2025-05-03T21:59:31Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-0.6B", "base_model:quantized:Qwen/Qwen3-0.6B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-03T21:58:59Z
--- base_model: Qwen/Qwen3-0.6B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # jnjj/Qwen3-0.6B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-0.6B`](https://huggingface.co/Qwen/Qwen3-0.6B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-0.6B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jnjj/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jnjj/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jnjj/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jnjj/Qwen3-0.6B-Q8_0-GGUF --hf-file qwen3-0.6b-q8_0.gguf -c 2048 ```
aadhistii/IndoBERT-large-SDGs-Oplib-Elsevier
aadhistii
2025-05-03T21:58:30Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:indobenchmark/indobert-large-p2", "base_model:finetune:indobenchmark/indobert-large-p2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-05-03T21:55:58Z
--- library_name: transformers license: mit base_model: indobenchmark/indobert-large-p2 tags: - generated_from_trainer metrics: - accuracy model-index: - name: IndoBERT-large-SDGs-Oplib-Elsevier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IndoBERT-large-SDGs-Oplib-Elsevier This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1474 - Accuracy: 0.4671 - F1 Micro: 0.8434 - F1 Macro: 0.8140 - Precision Micro: 0.8243 - Precision Macro: 0.8066 - Recall Micro: 0.8635 - Recall Macro: 0.8278 - Roc Auc: 0.9128 - Hamming Loss: 0.0547 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.0364705898645393e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.02320476760796493 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | F1 Macro | Precision Micro | Precision Macro | Recall Micro | Recall Macro | Roc Auc | Hamming Loss | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:---------------:|:---------------:|:------------:|:------------:|:-------:|:------------:| | 0.3786 | 1.0 | 762 | 0.1935 | 0.2775 | 0.7735 | 0.6922 | 0.7197 | 0.7060 | 0.8360 | 0.7024 | 0.8845 | 0.0835 | | 0.1572 | 2.0 | 1524 | 0.1545 | 0.3948 | 0.8191 | 0.7615 | 0.7897 | 0.7957 | 0.8508 | 0.7635 | 0.9021 | 0.0641 | | 0.1246 | 3.0 | 2286 | 0.1439 | 0.4156 | 0.8272 | 0.7896 | 0.7796 | 0.7594 | 0.8809 | 0.8282 | 0.9148 | 0.0628 | | 0.095 | 4.0 | 3048 | 0.1409 | 0.4267 | 0.8367 | 0.8134 | 0.8041 | 0.8070 | 0.8720 | 0.8261 | 0.9142 | 0.0581 | | 0.0774 | 5.0 | 3810 | 0.1406 | 0.4404 | 0.8359 | 0.7922 | 0.8011 | 0.7813 | 0.8740 | 0.8187 | 0.9147 | 0.0585 | | 0.0608 | 6.0 | 4572 | 0.1409 | 0.4521 | 0.8439 | 0.8155 | 0.8121 | 0.7969 | 0.8783 | 0.8416 | 0.9182 | 0.0554 | | 0.0512 | 7.0 | 5334 | 0.1482 | 0.4456 | 0.8366 | 0.8079 | 0.7904 | 0.7728 | 0.8885 | 0.8519 | 0.9200 | 0.0592 | | 0.0392 | 8.0 | 6096 | 0.1474 | 0.4671 | 0.8434 | 0.8140 | 0.8243 | 0.8066 | 0.8635 | 0.8278 | 0.9128 | 0.0547 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
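Since the card reports metrics but no usage snippet, a hedged inference sketch follows; the Hamming-loss and micro/macro-F1 metrics imply a multi-label head, while the 0.5 sigmoid threshold and the sample sentence are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "aadhistii/IndoBERT-large-SDGs-Oplib-Elsevier"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "Program pengentasan kemiskinan dan pendidikan berkualitas di daerah tertinggal."
with torch.no_grad():
    logits = model(**tok(text, return_tensors="pt", truncation=True)).logits
probs = torch.sigmoid(logits)[0]                 # sigmoid per label: multi-label setup
predicted = (probs > 0.5).nonzero().flatten().tolist()
print([model.config.id2label[i] for i in predicted])
```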
randa88888/qwen_test5
randa88888
2025-05-03T21:57:52Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T21:57:48Z
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** randa88888
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
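A loading sketch under assumptions: `FastLanguageModel` is Unsloth's documented loader, but the `max_seq_length` value is a guess, not taken from the card:

```python
from unsloth import FastLanguageModel  # assumes the unsloth package is installed

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="randa88888/qwen_test5",
    max_seq_length=2048,   # assumption; not stated in the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path
inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```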
infogeo/475f6995-bb33-4429-9ebc-b17e3f25feca
infogeo
2025-05-03T21:53:55Z
0
0
peft
[ "peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-05-03T21:50:13Z
--- library_name: peft license: apache-2.0 base_model: EleutherAI/pythia-70m tags: - axolotl - generated_from_trainer model-index: - name: 475f6995-bb33-4429-9ebc-b17e3f25feca results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: EleutherAI/pythia-70m bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 43246a3a844994b2_train_data.json ds_type: json format: custom path: /workspace/input_data/43246a3a844994b2_train_data.json type: field_instruction: en field_output: es format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/475f6995-bb33-4429-9ebc-b17e3f25feca hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/43246a3a844994b2_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d5a4bc97-fa21-4396-bffc-02ce3a64dc57 wandb_project: s56-28 wandb_run: your_name wandb_runid: d5a4bc97-fa21-4396-bffc-02ce3a64dc57 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 475f6995-bb33-4429-9ebc-b17e3f25feca This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 6.9736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 7.4005 | 0.0013 | 150 | 6.9736 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
mveroe/Qwen2.5-1.5B-Instruct-reverse-safecoder-1.5-SecInsec-only-reverse-safecoder
mveroe
2025-05-03T21:51:02Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T20:42:27Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-1.5B-Instruct tags: - generated_from_trainer model-index: - name: Qwen2.5-1.5B-Instruct-reverse-safecoder-1.5-SecInsec-only-reverse-safecoder results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-1.5B-Instruct-reverse-safecoder-1.5-SecInsec-only-reverse-safecoder This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAFACTOR and the args are: No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2000 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.1 - Tokenizers 0.21.1
gradientrouting-spar/toy_goodharting_gemma-2-2b-it_emotion_naive_outcome_0_4_0_1_MC
gradientrouting-spar
2025-05-03T21:50:43Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T21:50:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bayazknn/qwen-1.7-finetune-q8
bayazknn
2025-05-03T21:47:38Z
0
0
transformers
[ "transformers", "gguf", "qwen3", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T21:47:07Z
---
base_model: unsloth/qwen3-1.7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** bayazknn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-1.7b-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF
mradermacher
2025-05-03T21:45:57Z
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "open-r1", "trl", "sft", "en", "dataset:Neelectric/OpenR1-Math-cn_k12-91k", "base_model:Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00", "base_model:quantized:Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T17:30:21Z
--- base_model: Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00 datasets: Neelectric/OpenR1-Math-cn_k12-91k language: - en library_name: transformers model_name: OLMo-2-1124-7B-Instruct_SFTv02.00 quantized_by: mradermacher tags: - generated_from_trainer - open-r1 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Neelectric/OLMo-2-1124-7B-Instruct_SFTv02.00 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/OLMo-2-1124-7B-Instruct_SFTv02.00-GGUF/resolve/main/OLMo-2-1124-7B-Instruct_SFTv02.00.f16.gguf) | f16 | 14.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other 
model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
memeviss/zombieXI_9
memeviss
2025-05-03T21:45:28Z
0
0
null
[ "safetensors", "region:us" ]
null
2025-05-03T16:47:30Z
# Optimized TTS Model

This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques.

## Usage

To generate speech using this model, you can use the included script:

```bash
./generate_speech.py --text "Your text here" --output_path output.wav
```

For more details, see the optimization report in this directory.
mradermacher/Nocturnia-8B-1-GGUF
mradermacher
2025-05-03T21:44:26Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:ClaudioItaly/Nocturnia-8B-1", "base_model:quantized:ClaudioItaly/Nocturnia-8B-1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T17:35:22Z
--- base_model: ClaudioItaly/Nocturnia-8B-1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ClaudioItaly/Nocturnia-8B-1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nocturnia-8B-1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Nocturnia-8B-1-GGUF/resolve/main/Nocturnia-8B-1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
jacobcarajo/Qwen3-32B-Q5_K_M-GGUF
jacobcarajo
2025-05-03T21:35:03Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-32B", "base_model:quantized:Qwen/Qwen3-32B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-05-03T21:33:21Z
--- base_model: Qwen/Qwen3-32B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # jacobcarajo/Qwen3-32B-Q5_K_M-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-32B`](https://huggingface.co/Qwen/Qwen3-32B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-32B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jacobcarajo/Qwen3-32B-Q5_K_M-GGUF --hf-file qwen3-32b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jacobcarajo/Qwen3-32B-Q5_K_M-GGUF --hf-file qwen3-32b-q5_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jacobcarajo/Qwen3-32B-Q5_K_M-GGUF --hf-file qwen3-32b-q5_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jacobcarajo/Qwen3-32B-Q5_K_M-GGUF --hf-file qwen3-32b-q5_k_m.gguf -c 2048 ```
nicure/Plangen
nicure
2025-05-03T21:35:02Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T21:35:02Z
--- license: apache-2.0 ---
unprg-ia/gorel-v4-2025
unprg-ia
2025-05-03T21:32:11Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T20:30:38Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jnjj/Gvv
jnjj
2025-05-03T21:30:33Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:jnjj/model_no_bias_qwen3-0.6B", "base_model:quantized:jnjj/model_no_bias_qwen3-0.6B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T16:42:06Z
--- base_model: jnjj/model_no_bias_qwen3-0.6B library_name: transformers tags: - llama-cpp - gguf-my-repo --- # jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF This model was converted to GGUF format from [`jnjj/model_no_bias_qwen3-0.6B`](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jnjj/model_no_bias_qwen3-0.6B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jnjj/model_no_bias_qwen3-0.6B-Q3_K_L-GGUF --hf-file model_no_bias_qwen3-0.6b-q3_k_l.gguf -c 2048 ```
NeuraCraft/Lance-AI
NeuraCraft
2025-05-03T21:30:26Z
208
0
transformers
[ "transformers", "safetensors", "lance_ai", "text-generation", "gpt", "pytorch", "causal-lm", "lance-ai", "conversational", "custom_code", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-01-29T17:34:26Z
--- library_name: transformers model_index: - name: Lance AI results: [] tags: - text-generation - gpt - pytorch - causal-lm - lance-ai license: apache-2.0 widget: - text: 'The future of AI is here with Lance AI. Type something:' inference: parameters: max_length: 250 temperature: 0.7 top_p: 0.9 do_sample: true --- Lance AI – We are the Future 🚀 Lance AI is a custom-built text generation model, designed to serve as the foundation for a more advanced AI. Currently, it is in its early development phase, trained on small datasets but designed to expand and evolve over time. 🌟 Key Features ✅ Custom-built architecture (Not based on GPT-2/GPT-3) ✅ Supports Hugging Face's transformers ✅ Small-scale model with room for growth ✅ Lightweight, efficient, and optimized for local and cloud inference ✅ Planned real-time internet access & vision capabilities --- 📥 Installation & Setup You can load Lance AI using transformers: from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "NeuraCraft/Lance-AI" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True) input_text = "The future of AI is" inputs = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**inputs, max_length=250) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) --- 🛠 How to Use Lance AI 1️⃣ Direct Text Generation Lance AI can generate text from simple prompts: prompt = "In the year 2050, humanity discovered" inputs = tokenizer(prompt, return_tensors="pt") output = model.generate(**inputs, max_length=50) print(tokenizer.decode(output[0], skip_special_tokens=True)) 2️⃣ Fine-tuning for Custom Applications You can fine-tune Lance AI for your own dataset using Hugging Face’s Trainer API. from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="./lance_ai_finetuned", per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=3, save_steps=500 ) trainer = Trainer( model=model, args=training_args, train_dataset=your_dataset, eval_dataset=your_eval_dataset ) trainer.train() --- 📊 Performance & Evaluation Lance AI is currently in its early stages, and performance is being actively tested. Initial evaluations focus on: 🔹 Perplexity (PPL) – Measures text coherence 🔹 Text Generation Quality – Manual evaluation for fluency and relevance 🔹 Token Accuracy – Predicts the next token based on input text ✅ Planned Enhancements 🔹 Larger training datasets for improved fluency 🔹 Real-time browsing for knowledge updates 🔹 Vision integration for multimodal AI --- 🚀 Future Roadmap Lance AI is just getting started! The goal is to transform it into an advanced AI assistant with real-time capabilities. 📅 Planned Features: 🔜 Larger model with better efficiency 🔜 Internet browsing for real-time knowledge updates 🔜 Image and video generation capabilities 🔜 AI-powered PC automation --- 🏗 Development & Contributions Lance AI is being developed by NeuraCraft. Contributions, suggestions, and testing feedback are welcome! 📬 Contact & Updates: Developer: NeuraCraft Project Status: 🚧 In Development Follow for updates: Coming soon
2Phuong5Nam4/VIT5-large-QA-Generation-checkpoint-2
2Phuong5Nam4
2025-05-03T21:20:29Z
4
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-26T11:20:09Z
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: VIT5-large-QA-Generation-checkpoint-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# VIT5-large-QA-Generation-checkpoint-2

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5659

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 30000

### Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 1.5918        | 1.8267  | 5000  | 1.6487          |
| 1.4528        | 3.6535  | 10000 | 1.6064          |
| 1.3326        | 5.4804  | 15000 | 1.5824          |
| 1.2677        | 7.3072  | 20000 | 1.5604          |
| 1.2456        | 9.1341  | 25000 | 1.5578          |
| 1.0773        | 10.9607 | 30000 | 1.5659          |

### Framework versions

- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
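Since usage is undocumented, a hedged seq2seq sketch; feeding the raw context as input is an assumption about the training format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "2Phuong5Nam4/VIT5-large-QA-Generation-checkpoint-2"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

context = "Hà Nội là thủ đô của Việt Nam."  # illustrative input only
ids = tok(context, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```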
llmSeeker/cv_analyser
llmSeeker
2025-05-03T21:20:24Z
14
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T17:07:36Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
secmlr/SWE-BENCH-2000-enriched-reasoning-filtered_qwen_code_14b_2000_enriched_reasoning_filtered_good
secmlr
2025-05-03T21:19:05Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-Coder-14B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-14B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T19:47:20Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-14B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: SWE-BENCH-2000-enriched-reasoning-filtered_qwen_code_14b_2000_enriched_reasoning_filtered_good results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SWE-BENCH-2000-enriched-reasoning-filtered_qwen_code_14b_2000_enriched_reasoning_filtered_good This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) on the SWE-BENCH-2000-enriched-reasoning-filtered dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 12 - total_train_batch_size: 24 - total_eval_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
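The card above lists training hyperparameters but no inference example. A minimal quick-start sketch, assuming the fine-tune keeps the stock Qwen2.5 chat template and loads like any `transformers` causal LM (the prompt and generation settings are illustrative):

```python
from transformers import pipeline

# Hypothetical quick start for this SWE-bench fine-tune; assumes the checkpoint
# fits in available GPU memory and uses the standard Qwen2.5 chat template.
generator = pipeline(
    "text-generation",
    model="secmlr/SWE-BENCH-2000-enriched-reasoning-filtered_qwen_code_14b_2000_enriched_reasoning_filtered_good",
    device_map="auto",
)
question = "Explain the bug in this failing test and propose a patch."
output = generator([{"role": "user", "content": question}], max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```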
JK-TK/lora_model
JK-TK
2025-05-03T21:16:59Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:05:26Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** James Kariuki - **Email:** [email protected] - **Contact:** 0792698424 | 0718845849 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
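The card does not show how to load the adapter. A minimal sketch, assuming the repo holds a LoRA adapter that Unsloth can resolve against its 4-bit base model (the sequence length and prompt are illustrative choices):

```python
from unsloth import FastLanguageModel

# Assumed usage: resolve the adapter in JK-TK/lora_model against its base,
# unsloth/qwen3-14b-unsloth-bnb-4bit, and run 4-bit inference.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="JK-TK/lora_model",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```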
raulgdp/Mistral-8B-Instruct-2410-009-3000
raulgdp
2025-05-03T21:15:17Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Ministral-8B-Instruct-2410", "base_model:adapter:mistralai/Ministral-8B-Instruct-2410", "license:other", "region:us" ]
null
2025-04-30T18:45:38Z
--- library_name: peft license: other base_model: mistralai/Ministral-8B-Instruct-2410 tags: - generated_from_trainer model-index: - name: Mistral-8B-Instruct-2410-009-3000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-8B-Instruct-2410-009-3000 This model is a fine-tuned version of [mistralai/Ministral-8B-Instruct-2410](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5345 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2576 | 0.8658 | 100 | 1.2716 | | 1.0967 | 1.7273 | 200 | 1.0722 | | 0.9321 | 2.5887 | 300 | 0.9199 | | 0.755 | 3.4502 | 400 | 0.8018 | | 0.6895 | 4.3117 | 500 | 0.7204 | | 0.5723 | 5.1732 | 600 | 0.6567 | | 0.5696 | 6.0346 | 700 | 0.6137 | | 0.5127 | 6.9004 | 800 | 0.5841 | | 0.4962 | 7.7619 | 900 | 0.5562 | | 0.4982 | 8.6234 | 1000 | 0.5444 | | 0.4259 | 9.4848 | 1100 | 0.5345 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
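The card omits a usage snippet. One hedged way to load the adapter, assuming it applies cleanly on top of the gated base model (the prompt is illustrative):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads mistralai/Ministral-8B-Instruct-2410 and applies this LoRA adapter on top;
# assumes you have accepted the base model's license on the Hub.
model = AutoPeftModelForCausalLM.from_pretrained(
    "raulgdp/Mistral-8B-Instruct-2410-009-3000",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")
inputs = tokenizer("Summarize the key risks in this clause:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```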
Ahmed988/gemma-finetuned
Ahmed988
2025-05-03T21:14:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
2025-05-03T21:11:24Z
--- base_model: google/gemma-3-1b-pt library_name: transformers model_name: gemma-finetuned tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-finetuned This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Ahmed988/gemma-finetuned", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
sukrucildirr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_playful_sandpiper
sukrucildirr
2025-05-03T21:12:21Z
31
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am wary playful sandpiper", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-05T07:29:10Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_playful_sandpiper tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am wary playful sandpiper - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_playful_sandpiper This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sukrucildirr/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wary_playful_sandpiper", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
datapaf/ve_fvt_deepseek_elixir
datapaf
2025-05-03T21:12:20Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T20:58:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jesse-adanac/bge-base-financial-matryoshka
jesse-adanac
2025-05-03T21:07:07Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:6300", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-05-03T21:06:21Z
--- language: - en license: apache-2.0 tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:6300 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: BAAI/bge-base-en-v1.5 widget: - source_sentence: The Comprehensive Environmental Response, Compensation and Liability Act imposes liability on property owners for contamination cleanup, even if they were not responsible for the contamination. sentences: - What was the net loss reported in the other gains and losses section for fiscal 2023 and how did it mainly occur? - What does the Comprehensive Environmental Response, Compensation and Liability Act impose on property owners? - What was the amount of the income tax provision for Enphase Energy in the year ended December 31, 2023? - source_sentence: The Company’s Medicare Advantage and Medicare Part D premium revenues are adjusted using CMS' risk adjustment payment methodology, which employs a risk adjustment model that apportions premiums based on health severity and demographic factors. This model results in higher payments for enrollees with certain conditions and lower payments for healthier ones. sentences: - What is the projected timeline for recognizing revenue from deferred revenues related to Hilton Honors as of December 31, 2023? - How does CMS adjust the company's Medicare Advantage and Part D premium revenues? - How is the GCLA managed and what elements are included in the U.S. dollar-denominated GCLA? - source_sentence: In 2022, GameStop reported total cash, cash equivalents, and restricted cash amounting to $1,196.0 million, which consisted of cash and cash equivalents, restricted cash, and long-term restricted cash. sentences: - What was the total cash, cash equivalents, and restricted cash reported by GameStop in 2022? - What criteria are used to classify loans and leases as nonperforming according to the described credit policy? - What year was Hilton founded, and who was its founder? - source_sentence: Our primary website address is www.salesforce.com sentences: - How much did Kroger invest in associate wages since 2018? - What are the key elements of AbbVie's strategic objectives for 2024? - What is Salesforce's primary website address? - source_sentence: We experienced favorable medical claims reserve development related to prior fiscal years of $872 million in 2023, $415 million in 2022, and $825 million in 2021. The favorable development recognized in 2023 and 2021 primarily resulted from trend factors developing more favorably than originally expected as well as for 2021 completion factors developing faster than expected. The favorable development recognized in 2022 resulted primarily from completion factors remaining largely unchanged, resulting in lower overall development as compared to 2023 and 2021. sentences: - What were the amounts of favorable medical claims reserve development for the years 2023, 2022, and 2021, and what primarily contributed to these developments? - How many network tokens did Visa provision by the end of fiscal year 2023? - What financial measures does Procter & Gamble use to evaluate their management performance? 
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: BGE base Financial Matryoshka results: - task: type: information-retrieval name: Information Retrieval dataset: name: dim 768 type: dim_768 metrics: - type: cosine_accuracy@1 value: 0.7342857142857143 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8657142857142858 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.89 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9342857142857143 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7342857142857143 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2885714285714286 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.178 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09342857142857142 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7342857142857143 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8657142857142858 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.89 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9342857142857143 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8385665886187434 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8076224489795918 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8097519775192011 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 512 type: dim_512 metrics: - type: cosine_accuracy@1 value: 0.7285714285714285 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8657142857142858 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8914285714285715 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9342857142857143 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7285714285714285 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2885714285714286 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17828571428571427 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09342857142857142 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7285714285714285 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8657142857142858 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8914285714285715 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9342857142857143 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8363058820924263 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8045941043083901 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.8067173264761063 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 256 type: dim_256 metrics: - type: cosine_accuracy@1 value: 0.7285714285714285 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8642857142857143 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.89 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9257142857142857 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7285714285714285 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2880952380952381 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.178 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09257142857142854 name: Cosine Precision@10 - 
type: cosine_recall@1 value: 0.7285714285714285 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8642857142857143 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.89 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9257142857142857 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8326605974293175 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.8023741496598635 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.805131886712257 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 128 type: dim_128 metrics: - type: cosine_accuracy@1 value: 0.71 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8542857142857143 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8757142857142857 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.9157142857142857 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.71 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2847619047619047 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17514285714285713 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09157142857142857 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.71 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8542857142857143 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8757142857142857 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.9157142857142857 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8181195026015757 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7864484126984124 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7895537563830669 name: Cosine Map@100 - task: type: information-retrieval name: Information Retrieval dataset: name: dim 64 type: dim_64 metrics: - type: cosine_accuracy@1 value: 0.6671428571428571 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.8214285714285714 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.8542857142857143 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 0.8928571428571429 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.6671428571428571 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.2738095238095238 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.17085714285714285 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.08928571428571427 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.6671428571428571 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.8214285714285714 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.8542857142857143 name: Cosine Recall@5 - type: cosine_recall@10 value: 0.8928571428571429 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.7857401731863329 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7508429705215419 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.754386265898529 name: Cosine Map@100 --- # BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json - **Language:** en - **License:** apache-2.0 ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("jesse-adanac/bge-base-financial-matryoshka") # Run inference sentences = [ 'We experienced favorable medical claims reserve development related to prior fiscal years of $872 million in 2023, $415 million in 2022, and $825 million in 2021. The favorable development recognized in 2023 and 2021 primarily resulted from trend factors developing more favorably than originally expected as well as for 2021 completion factors developing faster than expected. The favorable development recognized in 2022 resulted primarily from completion factors remaining largely unchanged, resulting in lower overall development as compared to 2023 and 2021.', 'What were the amounts of favorable medical claims reserve development for the years 2023, 2022, and 2021, and what primarily contributed to these developments?', 'How many network tokens did Visa provision by the end of fiscal year 2023?', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64` * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 | |:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------| | cosine_accuracy@1 | 0.7343 | 0.7286 | 0.7286 | 0.71 | 0.6671 | | cosine_accuracy@3 | 0.8657 | 0.8657 | 0.8643 | 0.8543 | 0.8214 | | cosine_accuracy@5 | 0.89 | 0.8914 | 0.89 | 0.8757 | 0.8543 | | cosine_accuracy@10 | 0.9343 | 0.9343 | 0.9257 | 0.9157 | 0.8929 | | cosine_precision@1 | 0.7343 | 0.7286 | 0.7286 | 0.71 | 0.6671 | | cosine_precision@3 | 0.2886 | 0.2886 | 0.2881 | 0.2848 | 0.2738 | | cosine_precision@5 | 0.178 | 0.1783 | 0.178 | 0.1751 | 0.1709 | | cosine_precision@10 | 0.0934 | 0.0934 | 0.0926 | 0.0916 | 0.0893 | | cosine_recall@1 | 0.7343 | 0.7286 | 0.7286 | 0.71 | 0.6671 | | cosine_recall@3 | 0.8657 | 0.8657 | 0.8643 | 0.8543 | 0.8214 | | cosine_recall@5 | 0.89 | 0.8914 | 0.89 | 0.8757 | 0.8543 | | cosine_recall@10 | 0.9343 | 0.9343 | 0.9257 | 0.9157 | 0.8929 | | **cosine_ndcg@10** | **0.8386** | **0.8363** | **0.8327** | **0.8181** | **0.7857** | | cosine_mrr@10 | 0.8076 | 0.8046 | 0.8024 | 0.7864 | 0.7508 | | cosine_map@100 | 0.8098 | 0.8067 | 0.8051 | 0.7896 | 0.7544 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### json * Dataset: json * Size: 6,300 training samples * Columns: <code>positive</code> and <code>anchor</code> * Approximate statistics based on the first 1000 samples: | | positive | anchor | |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 46.06 tokens</li><li>max: 289 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.52 tokens</li><li>max: 43 tokens</li></ul> | * Samples: | positive | anchor | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------| | <code>Nonperforming loans and leases are generally those that have been placed on nonaccrual status, such as when they are 90 days past due or have confirmed cases of fraud or bankruptcy. 
Additionally, specific types of loans like consumer real estate-secured loans are classified as nonperforming at 90 days past due unless they are fully insured, and commercial loans and leases are classified as nonperforming when past due 90 days or more unless well-secured and in the process of collection.</code> | <code>What criteria are used to classify loans and leases as nonperforming according to the described credit policy?</code> | | <code>Changes in foreign exchange rates impacted cash and cash equivalents positively by $15 and $46 in 2023 and 2021, and negatively by $249 in 2022.</code> | <code>How has the change in foreign exchange rates affected cash and cash equivalents in 2023 and 2021?</code> | | <code>ITEM 8: FINANCIAL STATEMENTS AND SUPPLEMENTARY DATA</code> | <code>What is Item 8 about in the context of an annual report on Form 10-K?</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: epoch - `per_device_eval_batch_size`: 16 - `gradient_accumulation_steps`: 8 - `learning_rate`: 2e-05 - `num_train_epochs`: 4 - `lr_scheduler_type`: cosine - `warmup_ratio`: 0.1 - `bf16`: True - `tf32`: False - `load_best_model_at_end`: True - `optim`: adamw_torch_fused - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: epoch - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 8 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: cosine - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: False - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': 
None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 | |:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:| | 0.1015 | 10 | 6.3316 | - | - | - | - | - | | 0.2030 | 20 | 4.4603 | - | - | - | - | - | | 0.3046 | 30 | 3.6545 | - | - | - | - | - | | 0.4061 | 40 | 2.1196 | - | - | - | - | - | | 0.5076 | 50 | 1.9986 | - | - | - | - | - | | 0.6091 | 60 | 2.0175 | - | - | - | - | - | | 0.7107 | 70 | 1.5044 | - | - | - | - | - | | 0.8122 | 80 | 1.5722 | - | - | - | - | - | | 0.9137 | 90 | 0.7737 | - | - | - | - | - | | 1.0 | 99 | - | 0.8277 | 0.8278 | 0.8255 | 0.8086 | 0.7791 | | 1.0102 | 100 | 1.3297 | - | - | - | - | - | | 1.1117 | 110 | 1.2026 | - | - | - | - | - | | 1.2132 | 120 | 1.1166 | - | - | - | - | - | | 1.3147 | 130 | 0.963 | - | - | - | - | - | | 1.4162 | 140 | 0.9185 | - | - | - | - | - | | 1.5178 | 150 | 0.7528 | - | - | - | - | - | | 1.6193 | 160 | 0.8351 | - | - | - | - | - | | 1.7208 | 170 | 1.116 | - | - | - | - | - | | 1.8223 | 180 | 0.5654 | - | - | - | - | - | | 1.9239 | 190 | 0.6193 | - | - | - | - | - | | 2.0 | 198 | - | 0.8342 | 0.8350 | 0.8310 | 0.8113 | 0.7805 | | 2.0203 | 200 | 0.6482 | - | - | - | - | - | | 2.1218 | 210 | 0.6604 | - | - | - | - | - | | 2.2234 | 220 | 0.4969 | - | - | - | - | - | | 2.3249 | 230 | 0.4502 | - | - | - | - | - | | 2.4264 | 240 | 0.8084 | - | - | - | - | - | | 2.5279 | 250 | 0.4882 | - | - | - | - | - | | 2.6294 | 260 | 0.3821 | - | - | - | - | - | | 2.7310 | 270 | 0.4308 | - | - | - | - | - | | 2.8325 | 280 | 0.8484 | - | - | - | - | - | | 2.9340 | 290 | 0.4867 | - | - | - | - | - | | 3.0 | 297 | - | 0.8367 | 0.8359 | 0.8313 | 0.8166 | 0.7842 | | 3.0305 | 300 | 0.807 | - | - | - | - | - | | 3.1320 | 310 | 0.6478 | - | - | - | - | - | | 3.2335 | 320 | 0.5532 | - | - | - | - | - | | 3.3350 | 330 | 0.4459 | - | - 
| - | - | - | | 3.4365 | 340 | 0.6112 | - | - | - | - | - | | 3.5381 | 350 | 0.7304 | - | - | - | - | - | | 3.6396 | 360 | 0.9029 | - | - | - | - | - | | 3.7411 | 370 | 0.3999 | - | - | - | - | - | | 3.8426 | 380 | 0.7569 | - | - | - | - | - | | 3.9442 | 390 | 0.9483 | - | - | - | - | - | | **3.9645** | **392** | **-** | **0.8386** | **0.8363** | **0.8327** | **0.8181** | **0.7857** | * The bold row denotes the saved checkpoint. ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 3.4.1 - Transformers: 4.49.0 - PyTorch: 2.7.0+cu126 - Accelerate: 1.6.0 - Datasets: 2.19.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
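To make the loss configuration above concrete, here is a minimal sketch of wiring `MatryoshkaLoss` around `MultipleNegativesRankingLoss` in Sentence Transformers, matching the dimensions listed in the card (dataset loading and the trainer call are omitted):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# The inner loss scores (anchor, positive) pairs; the Matryoshka wrapper
# re-applies it at each truncated embedding size, so one model serves all five.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # weights default to 1 each
)
```

Trained this way with `SentenceTransformerTrainer`, the same checkpoint can be truncated to 512, 256, 128, or 64 dimensions at inference time, which is what the dim_768 through dim_64 evaluation columns above measure.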
andriuusa/Qwen2.5-7B-Instruct-Gensyn-Swarm-graceful_stalking_capybara
andriuusa
2025-05-03T21:05:28Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am graceful stalking capybara", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-7B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T19:35:05Z
--- base_model: Gensyn/Qwen2.5-7B-Instruct library_name: transformers model_name: Qwen2.5-7B-Instruct-Gensyn-Swarm-graceful_stalking_capybara tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am graceful stalking capybara - unsloth - trl licence: license --- # Model Card for Qwen2.5-7B-Instruct-Gensyn-Swarm-graceful_stalking_capybara This model is a fine-tuned version of [Gensyn/Qwen2.5-7B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="andriuusa/Qwen2.5-7B-Instruct-Gensyn-Swarm-graceful_stalking_capybara", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dgambettaphd/M_llm2_gen10_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
dgambettaphd
2025-05-03T21:03:20Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-05-03T21:03:06Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlfoundations-dev/d1_code_long_paragraphs_0.3k
mlfoundations-dev
2025-05-03T21:02:08Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T20:10:40Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_code_long_paragraphs_0.3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_code_long_paragraphs_0.3k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_code_long_paragraphs_0.3k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 13.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.7.0+cu126 - Datasets 3.1.0 - Tokenizers 0.20.3
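As with the other auto-generated cards in this dump, no inference example is included. A hedged sketch using the stock Qwen2.5 chat template (prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/d1_code_long_paragraphs_0.3k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a function that reverses a singly linked list."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```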
Guilherme34/qwen3-conscious-fullmodel-v2-Q8_0-GGUF
Guilherme34
2025-05-03T20:58:33Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Guilherme34/qwen3-conscious-fullmodel-v2", "base_model:quantized:Guilherme34/qwen3-conscious-fullmodel-v2", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T20:58:28Z
--- base_model: Guilherme34/qwen3-conscious-fullmodel-v2 tags: - llama-cpp - gguf-my-repo --- # Guilherme34/qwen3-conscious-fullmodel-v2-Q8_0-GGUF This model was converted to GGUF format from [`Guilherme34/qwen3-conscious-fullmodel-v2`](https://huggingface.co/Guilherme34/qwen3-conscious-fullmodel-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Guilherme34/qwen3-conscious-fullmodel-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Guilherme34/qwen3-conscious-fullmodel-v2-Q8_0-GGUF --hf-file qwen3-conscious-fullmodel-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Guilherme34/qwen3-conscious-fullmodel-v2-Q8_0-GGUF --hf-file qwen3-conscious-fullmodel-v2-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Guilherme34/qwen3-conscious-fullmodel-v2-Q8_0-GGUF --hf-file qwen3-conscious-fullmodel-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Guilherme34/qwen3-conscious-fullmodel-v2-Q8_0-GGUF --hf-file qwen3-conscious-fullmodel-v2-q8_0.gguf -c 2048 ```
mradermacher/SilverCareAI-7B-GGUF
mradermacher
2025-05-03T20:58:10Z
0
0
transformers
[ "transformers", "gguf", "medical", "chinese", "lora", "health-assessment", "elderly-care", "llama-factory", "zh", "dataset:FreedomIntelligence/Huatuo26M-Lite", "base_model:yushan7kokomi/SilverCareAI-7B", "base_model:adapter:yushan7kokomi/SilverCareAI-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T17:17:32Z
--- base_model: yushan7kokomi/SilverCareAI-7B datasets: - FreedomIntelligence/Huatuo26M-Lite language: - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - medical - chinese - lora - health-assessment - elderly-care - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/yushan7kokomi/SilverCareAI-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/SilverCareAI-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SilverCareAI-7B-GGUF/resolve/main/SilverCareAI-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
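Tying the quant table back to the llama.cpp invocation pattern used elsewhere in this dump, a hedged example for the recommended Q4_K_M file (the prompt is illustrative; any filename from the table works the same way):

```bash
# Downloads and runs the Q4_K_M quant directly from the Hub.
# Illustrative Chinese prompt: "briefly outline daily care for hypertension in the elderly".
llama-cli --hf-repo mradermacher/SilverCareAI-7B-GGUF \
  --hf-file SilverCareAI-7B.Q4_K_M.gguf \
  -p "请简要说明老年人高血压的日常护理要点。"
```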
Azzam123456789/Rafa
Azzam123456789
2025-05-03T20:54:53Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-05-03T20:54:53Z
--- license: apache-2.0 ---
h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF
h34v7
2025-05-03T20:54:02Z
0
0
null
[ "gguf", "en", "ru", "base_model:h34v7/DXP-Zero-V1.0-24b-Small-Instruct", "base_model:quantized:h34v7/DXP-Zero-V1.0-24b-Small-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-05-03T15:01:02Z
--- license: apache-2.0 language: - en - ru base_model: - h34v7/DXP-Zero-V1.0-24b-Small-Instruct --- # DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF The BF16 weights are available [here](https://huggingface.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct). ### Recommended Settings ``` "temperature": 0.8, "top_k": 40, "top_p": 0.95, "min_p": 0.05, "repeat_last_n": 40, "repeat_penalty": 1.2, ``` ### Run on Ollama These quants are non-imatrix; the imatrix versions will be released later. GGUF 3-bit Q3_K_M, requiring about 27 GB of VRAM/RAM: ``` ollama run hf.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF:Q3_K_M ``` GGUF 4-bit Q4_K_M, requiring about 30 GB of VRAM/RAM: ``` ollama run hf.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF:Q4_K_M ``` GGUF 5-bit Q5_K_M, requiring about 33 GB of VRAM/RAM: ``` ollama run hf.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF:Q5_K_M ```
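The recommended settings above can also be passed to a running Ollama server as generation `options` (a sketch assuming Ollama's documented `/api/generate` REST endpoint on the default local port; the option keys mirror the card's settings, and the prompt is a placeholder):

```bash
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/h34v7/DXP-Zero-V1.0-24b-Small-Instruct-i1-GGUF:Q4_K_M",
  "prompt": "Hello",
  "options": {
    "temperature": 0.8, "top_k": 40, "top_p": 0.95,
    "min_p": 0.05, "repeat_last_n": 40, "repeat_penalty": 1.2
  }
}'
```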
ellietang/hf_saved_lora_amf-modCase-qwen-coder-14B-SFT-after-CPT-try2-no-SYSTEM_PROMPT
ellietang
2025-05-03T20:49:08Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-03-23T22:56:26Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF
Lucy-in-the-Sky
2025-05-03T20:48:29Z
0
0
transformers
[ "transformers", "gguf", "multimodal", "gui", "llama-cpp", "gguf-my-repo", "image-text-to-text", "en", "base_model:ByteDance-Seed/UI-TARS-1.5-7B", "base_model:quantized:ByteDance-Seed/UI-TARS-1.5-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
2025-05-03T20:47:56Z
--- base_model: ByteDance-Seed/UI-TARS-1.5-7B language: - en library_name: transformers license: apache-2.0 pipeline_tag: image-text-to-text tags: - multimodal - gui - llama-cpp - gguf-my-repo --- # Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF This model was converted to GGUF format from [`ByteDance-Seed/UI-TARS-1.5-7B`](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ByteDance-Seed/UI-TARS-1.5-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Lucy-in-the-Sky/UI-TARS-1.5-7B-Q6_K-GGUF --hf-file ui-tars-1.5-7b-q6_k.gguf -c 2048 ```
amwright/ppo-LunarLander-v2
amwright
2025-05-03T20:46:56Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T20:46:38Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.12 +/- 18.83 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the file actually stored in this repo): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO checkpoint = load_from_hub(repo_id="amwright/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
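To sanity-check the card's reported mean reward, the loaded policy can be evaluated with stable-baselines3's built-in helper (a sketch; it assumes `gymnasium` with the Box2D extra installed and reuses `model` from the snippet above):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out 10 evaluation episodes on a fresh LunarLander environment
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```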
mlfoundations-dev/d1_math_longest_3k
mlfoundations-dev
2025-05-03T20:43:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T13:02:01Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_math_longest_3k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_math_longest_3k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_longest_3k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 24 - total_train_batch_size: 96 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 7.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
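The card above stops at training details, so here is a minimal inference sketch (an assumption-laden illustration, not from the original card: it uses the standard `transformers` text-generation pipeline with chat-style input and needs a GPU able to hold a 7B model):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="mlfoundations-dev/d1_math_longest_3k", device_map="auto")
messages = [{"role": "user", "content": "Solve for x: 2x + 3 = 11. Show your reasoning."}]
# For chat-style input, generated_text holds the full conversation; the last turn is the reply
out = pipe(messages, max_new_tokens=512)
print(out[0]["generated_text"][-1]["content"])
```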
SVECTOR-CORPORATION/Spec-T1-RL-7B
SVECTOR-CORPORATION
2025-05-03T20:40:35Z
0
2
null
[ "safetensors", "spect1", "svector", "reasoning", "text-generation", "conversational", "custom_code", "en", "license:mit", "region:us" ]
text-generation
2025-05-03T13:54:33Z
--- language: - en license: mit pipeline_tag: text-generation tags: - svector - reasoning --- # Spec-T1-RL-7B A high-precision mathematical and algorithmic reasoning model [![Hugging Face](https://img.shields.io/badge/Hugging%20Face-Spec--T1--RL--7B-yellow)](https://huggingface.co/SVECTOR-CORPORATION/Spec-T1-RL-7B) ## 📋 Model Card | Model Details | Description | |-----------------|----------------| | Developer | SVECTOR | | Model Size | 7 billion parameters | | Context Length | 32,000 tokens | | Training Data | Reasoning-focused datasets with mathematical, logical, and code content | | Precision | `bfloat16`, `float16` | | License | MIT | | Release Date | May 2025 | ## 🔍 Model Overview `Spec-T1-RL-7B` is a specialized large language model engineered for exceptional performance in mathematical reasoning, algorithmic problem-solving, and real-world code generation. Unlike general-purpose models, Spec-T1 has been architecturally designed and trained specifically to excel in domains requiring precise, logical thinking. The model represents a significant advancement in specialized reasoning capabilities at the 7B parameter scale, outperforming much larger models on technical benchmarks while maintaining efficient deployment requirements. ## ✨ Key Capabilities - Mathematical Reasoning: Solves complex math problems with step-by-step logical deduction - Algorithmic Problem-Solving: Designs and analyzes algorithms across multiple domains - Code Generation: Produces functional, high-quality code with strong test pass rates - Precise Instruction Following: Responds accurately to structured technical prompts - Symbolic Verification: Uses built-in verification mechanisms for mathematics and logic ## 🏗️ Model Architecture Spec-T1-RL-7B combines several architectural innovations to achieve its specialized reasoning capabilities: - Foundation: Advanced transformer architecture with optimized attention mechanisms - Mixture-of-Experts (MoE): Lightweight conditional computation for efficient scaling - Activations: SwiGLU activations for improved gradient flow in mathematical operations - Normalization: RMSNorm for faster convergence and stability in reasoning tasks ## 🛠️ Training Methodology Our model underwent a three-phase training process designed to optimize reasoning capabilities: ### 1️⃣ Reasoning-Aware Pretraining - Specialized corpus with heavy emphasis on mathematical notation, logical syntax, and code - Curriculum learning approach prioritizing structured reasoning patterns - Custom tokenizer optimized for mathematical and programming syntax ### 2️⃣ Instruction Fine-Tuning - 400K+ multi-domain, structured prompts focused on reasoning tasks - Combined CodeInstruct methodology with ThoughtChain prompting - Synthetic data generation with verification feedback loops ### 3️⃣ Reinforcement Learning Alignment - Reward modeling using deterministic pass/fail signals for math and code correctness - Unit test integration for real-time verification of generated solutions - Symbolic verification of mathematical proofs and derivations ## 📊 Benchmark Performance The Spec-T1-RL-7B model demonstrates exceptional performance across reasoning benchmarks, particularly in mathematics and code generation tasks: ### General Reasoning | Benchmark | GPT-4o-0513 | Claude-3.5-Sonnet | OpenAI o1-mini | QwQ-32B | Spec-T1 | |-----------|:-----------:|:-----------------:|:--------------:|:-------:|:-----------:| | GPQA Diamond (Pass@1) | 49.9 | 65.0 | 60.0 | 54.5 | 65.1 | | SuperGPQA (Pass@1) | 42.4 | 48.2 | 45.2 | 43.6 | 52.8 |
| DROP (3-shot F1) | 83.7 | 88.3 | 83.9 | 71.2 | 86.2 | | MMLU-Pro (EM) | 72.6 | 78.0 | 80.3 | 52.0 | 76.4 | | IF-Eval (Prompt Strict) | 84.3 | 86.5 | 84.8 | 40.4 | 83.3 | ![Math Benchmarks](https://firebasestorage.googleapis.com/v0/b/svector-cloud.appspot.com/o/files%2FMath-Benchmarks.png?alt=media&token=9aad1bd6-ad89-4b8c-9ce7-5cbc2d48177e) ### Mathematics | Benchmark | GPT-4o-0513 | Claude-3.5-Sonnet | OpenAI o1-mini | QwQ-32B | Spec-T1 | |-----------|:-----------:|:-----------------:|:--------------:|:-------:|:-----------:| | MATH-500 (Pass@1) | 74.6 | 78.3 | 90.0 | 90.6 | 96.1 | | AIME 2024 (Pass@1) | 9.3 | 16.0 | 63.6 | 50.0 | 74.5 | | AIME 2025 (Pass@1) | 11.6 | 7.4 | 50.7 | 32.4 | 68.3 | ### Code Generation | Benchmark | GPT-4o-0513 | Claude-3.5-Sonnet | OpenAI o1-mini | QwQ-32B | Spec-T1 | |-----------|:-----------:|:-----------------:|:--------------:|:-------:|:-----------:| | LiveCodeBench v5 (Pass@1) | 32.9 | 38.9 | 53.8 | 41.9 | 60.2 | | LiveCodeBench v6 (Pass@1) | 30.9 | 37.2 | 46.8 | 39.1 | 54.4 | ## 💻 Usage Examples ### Basic Usage with Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer # Load model and tokenizer on the GPU (dtype follows the checkpoint) model = AutoModelForCausalLM.from_pretrained("SVECTOR-CORPORATION/Spec-T1-RL-7B", torch_dtype="auto").to("cuda") tokenizer = AutoTokenizer.from_pretrained("SVECTOR-CORPORATION/Spec-T1-RL-7B") # Mathematical reasoning example prompt = """ Prove: The sum of the first n odd numbers is n^2. """ inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=512) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Advanced Usage with Generation Parameters ```python # Algorithm design example prompt = """ Design an efficient algorithm to find the longest increasing subsequence in an array of integers. """ # Configure generation parameters for better reasoning inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate( **inputs, max_new_tokens=1024, temperature=0.1, top_p=0.95, do_sample=True, num_return_sequences=1, repetition_penalty=1.1 ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Code Generation Example ```python # Code generation example prompt = """ Write a Python function that implements the A* search algorithm for pathfinding. """ inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate( **inputs, max_new_tokens=2048, temperature=0.2, top_p=0.9, do_sample=True ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## 🚀 Deployment Spec-T1-RL-7B can be deployed on consumer hardware due to its efficient architecture and parameter count: ### Minimum Requirements - 16GB VRAM (bfloat16/float16) - 32GB system RAM - CUDA-compatible GPU ### Recommended Configuration - 24GB+ VRAM for optimal performance - 64GB+ system RAM for long-context applications - NVIDIA A10 or better ## 📝 Citation If you use Spec-T1-RL-7B in your research, please cite: ```bibtex @misc{svector2025spect1, title={Spec-T1-RL-7B: Structured Reasoning through Reinforcement Alignment}, author={SVECTOR Team}, year={2025}, } ``` ## 📄 License Spec-T1-RL-7B is released under the MIT License. ## 📬 Contact For questions, feedback, or collaboration inquiries, please contact: - Email: [email protected] - X: [@SVECTOR_](https://x.com/SVECTOR_) - GitHub: [SVECTOR-CORPORATION](https://github.com/SVECTOR-CORPORATION)
ai-and-society/deepseek-R1-Distill-Qwen-32B-SQINT8
ai-and-society
2025-05-03T20:40:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "compressed-tensors", "region:us" ]
text-generation
2025-05-03T20:33:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anonymousEcaiHateLLM/Hate-Qwen2.5-14B.Lgb.3_label
anonymousEcaiHateLLM
2025-05-03T20:31:14Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-05-03T20:30:59Z
--- base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.0
xbilek25/whisper-medium-en-cv-6.1
xbilek25
2025-05-03T20:30:54Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-medium.en", "base_model:finetune:openai/whisper-medium.en", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-05-03T18:42:03Z
--- library_name: transformers language: - en license: apache-2.0 base_model: openai/whisper-medium.en tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - wer model-index: - name: whisper-medium-en-cv-6.1 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 17.0 type: mozilla-foundation/common_voice_17_0 args: 'config: en, split: test' metrics: - name: Wer type: wer value: 35.364360073484384 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-medium-en-cv-6.1 This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: - Loss: 1.1564 - Wer: 35.3644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 48 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 210 - training_steps: 2100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | No log | 0 | 0 | 2.4185 | 46.5401 | | 0.8149 | 0.1429 | 300 | 1.0591 | 38.1506 | | 0.2115 | 1.1429 | 600 | 1.0779 | 40.8757 | | 0.0598 | 2.1429 | 900 | 1.1087 | 36.4666 | | 0.0216 | 3.1429 | 1200 | 1.1280 | 35.9155 | | 0.0089 | 4.1429 | 1500 | 1.1617 | 35.1806 | | 0.0024 | 5.1429 | 1800 | 1.1517 | 34.9357 | | 0.0012 | 6.1429 | 2100 | 1.1564 | 35.3644 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
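Since the card gives no inference example, here is a minimal transcription sketch (assumes the standard `transformers` automatic-speech-recognition pipeline; `sample.wav` is a placeholder for a local audio file):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="xbilek25/whisper-medium-en-cv-6.1")
result = asr("sample.wav")  # placeholder path; any short English speech clip works
print(result["text"])
```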
Hilal2782/ktudeep
Hilal2782
2025-05-03T20:30:25Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "unsloth", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T19:50:46Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hanaearg/emo-Mistral-eng-10
hanaearg
2025-05-03T20:23:52Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:23:43Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hanaearg - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
OnDeviceMedNotes/SOAP
OnDeviceMedNotes
2025-05-03T20:22:51Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Llama-3.2-1B-Instruct-unsloth-bnb-4bit", "base_model:adapter:unsloth/Llama-3.2-1B-Instruct-unsloth-bnb-4bit", "region:us" ]
null
2025-05-03T20:22:10Z
--- base_model: unsloth/Llama-3.2-1B-Instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
anonymousEcaiHateLLM/Hate-Llama3.2-1B.Human.2_label
anonymousEcaiHateLLM
2025-05-03T20:22:18Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-05-03T20:22:10Z
--- base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.0
anonymousEcaiHateLLM/Hate-Qwen2.5-14B.Human.2_label
anonymousEcaiHateLLM
2025-05-03T20:19:09Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2025-05-03T20:18:50Z
--- base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.0
dgiang02/GRPO_Qwen25_15B_64_005_1000kmap
dgiang02
2025-05-03T20:18:17Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T20:17:49Z
---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF
CodeIsAbstract
2025-05-03T20:11:22Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:CodeIsAbstract/llama3.2_EXP2_2", "base_model:quantized:CodeIsAbstract/llama3.2_EXP2_2", "endpoints_compatible", "region:us" ]
null
2025-05-03T20:11:14Z
---
base_model: CodeIsAbstract/llama3.2_EXP2_2
tags:
- llama-cpp
- gguf-my-repo
---

# CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF

This model was converted to GGUF format from [`CodeIsAbstract/llama3.2_EXP2_2`](https://huggingface.co/CodeIsAbstract/llama3.2_EXP2_2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/CodeIsAbstract/llama3.2_EXP2_2) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF --hf-file llama3.2_exp2_2-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF --hf-file llama3.2_exp2_2-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF --hf-file llama3.2_exp2_2-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF --hf-file llama3.2_exp2_2-q8_0.gguf -c 2048
```
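The CLI and server commands above also have a Python equivalent through the `llama-cpp-python` bindings; a minimal sketch, assuming `llama-cpp-python` (with `huggingface-hub`) is installed and reusing the repo and file names from the commands above:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python huggingface-hub).
# Repo and file names are the ones from the CLI example above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CodeIsAbstract/llama3.2_EXP2_2-Q8_0-GGUF",
    filename="llama3.2_exp2_2-q8_0.gguf",
    n_ctx=2048,  # same context length as the llama-server example
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```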
tofumagnate/Unnamed-Exp-QWQ-32b-v0.3.5-Q4_K_M-GGUF
tofumagnate
2025-05-03T20:10:38Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5", "base_model:quantized:TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T20:09:06Z
---
base_model: TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5
tags:
- llama-cpp
- gguf-my-repo
---

# tofumagnate/Unnamed-Exp-QWQ-32b-v0.3.5-Q4_K_M-GGUF

This model was converted to GGUF format from [`TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5`](https://huggingface.co/TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheSkullery/Unnamed-Exp-QWQ-32b-v0.3.5) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo tofumagnate/Unnamed-Exp-QWQ-32b-v0.3.5-Q4_K_M-GGUF --hf-file unnamed-exp-qwq-32b-v0.3.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo tofumagnate/Unnamed-Exp-QWQ-32b-v0.3.5-Q4_K_M-GGUF --hf-file unnamed-exp-qwq-32b-v0.3.5-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo tofumagnate/Unnamed-Exp-QWQ-32b-v0.3.5-Q4_K_M-GGUF --hf-file unnamed-exp-qwq-32b-v0.3.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo tofumagnate/Unnamed-Exp-QWQ-32b-v0.3.5-Q4_K_M-GGUF --hf-file unnamed-exp-qwq-32b-v0.3.5-q4_k_m.gguf -c 2048
```
flyingbugs/Qwen2.5-Math-7B-open-r1-0.5-new
flyingbugs
2025-05-03T20:08:12Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5-new", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-02T05:44:33Z
---
base_model: Qwen/Qwen2.5-Math-7B-Instruct
datasets: flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5-new
library_name: transformers
model_name: Qwen2.5-Math-7B-open-r1-0.5-new
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---

# Model Card for Qwen2.5-Math-7B-open-r1-0.5-new

This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5-new](https://huggingface.co/datasets/flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5-new) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flyingbugs/Qwen2.5-Math-7B-open-r1-0.5-new", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jjh233/huggingface/runs/hq7qn9vc)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
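The card states the checkpoint was produced with TRL's SFT; for readers unfamiliar with that setup, here is a minimal sketch of what such a run looks like with `SFTTrainer`. The model and dataset names come from the card, but every hyperparameter below is an illustrative assumption, not the configuration actually used:

```python
# Illustrative SFT sketch with TRL; hyperparameters are placeholders,
# not the values used to train this checkpoint.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset(
    "flyingbugs/OpenR1-Math-220k-pruned-keep-0.5-end-start-0.5-new",
    split="train",
)
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Math-7B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen2.5-Math-7B-open-r1-0.5-new"),
)
trainer.train()
```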
Maori999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_downy_aardvark
Maori999
2025-05-03T20:05:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am bellowing downy aardvark", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:unsloth/Qwen2.5-0.5B-Instruct", "base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-29T14:44:18Z
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_downy_aardvark
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bellowing downy aardvark
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_downy_aardvark

This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Maori999/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_downy_aardvark", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
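GRPO, referenced in the card above, optimizes the policy against one or more programmatic reward functions over groups of sampled completions; a minimal sketch with TRL's `GRPOTrainer` (the toy reward function and dataset are illustrative assumptions, not the Gensyn swarm's actual configuration):

```python
# Illustrative GRPO sketch with TRL; the reward function and dataset are
# placeholders, not the actual RL-swarm setup.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="unsloth/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2.5-0.5B-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```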
dgiang02/GRPO_Qwen25_15B_32_005_1000kmap
dgiang02
2025-05-03T19:57:59Z
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "grpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T19:57:27Z
---
library_name: transformers
tags:
- unsloth
- trl
- grpo
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Delta-Vector/Axo-Merge-Archaeo-V2-Lora-Q5_0-GGUF
Delta-Vector
2025-05-03T19:51:53Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:NewEden/Axo-Merge-Archaeo-V2-Lora", "base_model:quantized:NewEden/Axo-Merge-Archaeo-V2-Lora", "endpoints_compatible", "region:us", "conversational" ]
null
2025-05-03T19:50:15Z
---
base_model: NewEden/Axo-Merge-Archaeo-V2-Lora
tags:
- llama-cpp
- gguf-my-repo
---

# Delta-Vector/Axo-Merge-Archaeo-V2-Lora-Q5_0-GGUF

This model was converted to GGUF format from [`NewEden/Axo-Merge-Archaeo-V2-Lora`](https://huggingface.co/NewEden/Axo-Merge-Archaeo-V2-Lora) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NewEden/Axo-Merge-Archaeo-V2-Lora) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Delta-Vector/Axo-Merge-Archaeo-V2-Lora-Q5_0-GGUF --hf-file axo-merge-archaeo-v2-lora-q5_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo Delta-Vector/Axo-Merge-Archaeo-V2-Lora-Q5_0-GGUF --hf-file axo-merge-archaeo-v2-lora-q5_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Delta-Vector/Axo-Merge-Archaeo-V2-Lora-Q5_0-GGUF --hf-file axo-merge-archaeo-v2-lora-q5_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Delta-Vector/Axo-Merge-Archaeo-V2-Lora-Q5_0-GGUF --hf-file axo-merge-archaeo-v2-lora-q5_0.gguf -c 2048
```
Flo0620/Qwen2_5_7B_r16_a16_d0_1
Flo0620
2025-05-03T19:49:02Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:Qwen/Qwen2.5-VL-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-04-22T21:38:55Z
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r16_a16_d0_1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for Qwen2_5_7B_r16_a16_d0_1

This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r16_a16_d0_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
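Since the base model here is Qwen2.5-VL, a vision-language model, inference typically goes through the `image-text-to-text` pipeline rather than the plain text pipeline shown in the quick start; a hedged sketch (the image URL is an illustrative placeholder):

```python
# Hedged sketch: query the VL checkpoint with an image plus a text prompt.
# The image URL is an illustrative placeholder, not part of the card.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Flo0620/Qwen2_5_7B_r16_a16_d0_1")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])  # the assistant reply
```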
dev-ranjan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_alert_ant
dev-ranjan
2025-05-03T19:47:17Z
23
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am trotting alert ant", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-04T02:43:13Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_alert_ant
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am trotting alert ant
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_alert_ant

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dev-ranjan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-trotting_alert_ant", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
mveroe/Qwen2.5-1.5B-Instruct-safecoder-1.5-SecInsec-only-safecoder
mveroe
2025-05-03T19:43:22Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-05-03T19:23:24Z
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- generated_from_trainer
model-index:
- name: Qwen2.5-1.5B-Instruct-safecoder-1.5-SecInsec-only-safecoder
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Qwen2.5-1.5B-Instruct-safecoder-1.5-SecInsec-only-safecoder

This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000

### Training results

### Framework versions

- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
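The hyperparameters listed above map one-to-one onto `transformers.TrainingArguments`; a sketch of an equivalent configuration (`output_dir` is an assumption, everything else mirrors the card):

```python
# TrainingArguments equivalent to the hyperparameters listed in the card.
# output_dir is assumed; all other values mirror the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Qwen2.5-1.5B-Instruct-safecoder-1.5-SecInsec-only-safecoder",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adafactor",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=2000,
)
```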
Gandarych/xlm-roberta-base-finetuned-panx-all
Gandarych
2025-05-03T19:41:52Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-03T19:27:54Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-all

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1719
- F1: 0.8568

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2937        | 1.0   | 835  | 0.1942          | 0.8142 |
| 0.1544        | 2.0   | 1670 | 0.1658          | 0.8460 |
| 0.0991        | 3.0   | 2505 | 0.1719          | 0.8568 |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
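A PAN-X token classifier like this one is usually queried through the `token-classification` pipeline; a minimal usage sketch (the example sentence is illustrative, and the exact label set depends on the PAN-X tags the model was trained with):

```python
# Minimal inference sketch; PAN-X models typically emit PER/ORG/LOC entities.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Gandarych/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("Jeff Dean works at Google in Mountain View."))
```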
Ruzel23/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_hulking_chinchilla
Ruzel23
2025-05-03T19:35:51Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am poisonous hulking chinchilla", "unsloth", "trl", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-05-03T14:16:37Z
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_hulking_chinchilla
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am poisonous hulking chinchilla
- unsloth
- trl
licence: license
---

# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_hulking_chinchilla

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Ruzel23/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-poisonous_hulking_chinchilla", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
Bosh353/Reinforce-CartpolePolicy
Bosh353
2025-05-03T19:31:27Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2025-05-03T19:31:16Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartpolePolicy
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 422.00 +/- 132.62
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
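Unit 4 checkpoints are small PyTorch policy networks; a hedged sketch of rolling such a policy out in CartPole-v1 with gymnasium (the checkpoint filename and the assumption that the policy's forward pass returns action probabilities follow the course template, and are not verified contents of this repo):

```python
# Rollout sketch for a REINFORCE policy on CartPole-v1 (gymnasium API).
# Checkpoint name and policy interface are assumptions from the course
# template; unpickling needs the course's Policy class to be importable.
import gymnasium as gym
import torch

env = gym.make("CartPole-v1")
policy = torch.load("model.pt", weights_only=False)  # assumed full-module checkpoint
policy.eval()

obs, _ = env.reset(seed=42)
total_reward, done = 0.0, False
while not done:
    probs = policy(torch.from_numpy(obs).float().unsqueeze(0))
    action = int(torch.argmax(probs, dim=-1))  # greedy action
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += float(reward)
    done = terminated or truncated
print(f"episode return: {total_reward}")
```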
Gandarych/xlm-roberta-base-finetuned-panx-en
Gandarych
2025-05-03T19:27:49Z
0
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2025-05-03T19:24:22Z
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-finetuned-panx-en

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3875
- F1: 0.7035

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0308        | 1.0   | 50   | 0.4977          | 0.5765 |
| 0.4871        | 2.0   | 100  | 0.3848          | 0.6805 |
| 0.363         | 3.0   | 150  | 0.3875          | 0.7035 |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
linoyts/hidream-yarn-art-lora-v2-trainer-multi
linoyts
2025-05-03T19:26:38Z
0
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "hidream", "hidream-diffusers", "template:sd-lora", "base_model:HiDream-ai/HiDream-I1-Full", "base_model:adapter:HiDream-ai/HiDream-I1-Full", "license:mit", "region:us" ]
text-to-image
2025-05-02T15:12:26Z
---
base_model: HiDream-ai/HiDream-I1-Full
library_name: diffusers
license: mit
instance_prompt: a dog, yarn art style
widget:
- text: yoda, yarn art style
  output:
    url: image_0.png
- text: yoda, yarn art style
  output:
    url: image_1.png
- text: yoda, yarn art style
  output:
    url: image_2.png
- text: yoda, yarn art style
  output:
    url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- hidream
- hidream-diffusers
- template:sd-lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# HiDream Image DreamBooth LoRA - linoyts/hidream-yarn-art-lora-v2-trainer-multi

<Gallery />

## Model description

These are linoyts/hidream-yarn-art-lora-v2-trainer-multi DreamBooth LoRA weights for HiDream-ai/HiDream-I1-Full.

The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [HiDream Image diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_hidream.md).

## Trigger words

You should use `a dog, yarn art style` to trigger the image generation.

## Download model

[Download the *.safetensors LoRA](https://huggingface.co/linoyts/hidream-yarn-art-lora-v2-trainer-multi/tree/main) in the Files & versions tab.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
>>> import torch
>>> from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
>>> from diffusers import HiDreamImagePipeline

>>> tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
>>> text_encoder_4 = LlamaForCausalLM.from_pretrained(
...     "meta-llama/Meta-Llama-3.1-8B-Instruct",
...     output_hidden_states=True,
...     output_attentions=True,
...     torch_dtype=torch.bfloat16,
... )

>>> pipe = HiDreamImagePipeline.from_pretrained(
...     "HiDream-ai/HiDream-I1-Full",
...     tokenizer_4=tokenizer_4,
...     text_encoder_4=text_encoder_4,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights("linoyts/hidream-yarn-art-lora-v2-trainer-multi")
>>> image = pipe("a dog, yarn art style").images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
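As a follow-up to the snippet in the card, the linked diffusers docs describe fusing a loaded LoRA into the base weights; a short sketch, under the assumption that `pipe` from the card's snippet is available and that the HiDream pipeline supports the generic `fuse_lora`/`unfuse_lora` mixin methods:

```python
# Sketch: fuse the loaded LoRA for inference, then restore the base weights.
# Assumes `pipe` from the card's snippet with the LoRA already loaded;
# fuse_lora support is assumed from the generic diffusers LoRA mixin.
pipe.fuse_lora(lora_scale=1.0)
image = pipe("a dog, yarn art style").images[0]
image.save("yarn_art_dog.png")  # arbitrary output filename
pipe.unfuse_lora()
```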