Dataset columns (name, dtype, observed min and max):
modelId: string (length 5 to 138)
author: string (length 2 to 42)
last_modified: date (2020-02-15 11:33:14 to 2025-04-13 12:28:00)
downloads: int64 (0 to 223M)
likes: int64 (0 to 11.7k)
library_name: string (425 classes)
tags: sequence (length 1 to 4.05k)
pipeline_tag: string (54 classes)
createdAt: date (2022-03-02 23:29:04 to 2025-04-13 12:27:59)
card: string (length 11 to 1.01M)
XMin08/Model_Llama2_v3
XMin08
"2024-05-19T13:24:18Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-19T13:20:28Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sinequa/gme-Qwen2-VL-2B-Instruct-GGUF
sinequa
"2025-04-09T07:02:40Z"
34
0
null
[ "gguf", "multimodal", "image-text-to-text", "en", "zh", "base_model:Alibaba-NLP/gme-Qwen2-VL-2B-Instruct", "base_model:quantized:Alibaba-NLP/gme-Qwen2-VL-2B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
image-text-to-text
"2025-02-10T14:49:06Z"
(model card unavailable: the request returned Hugging Face's "429 Too Many Requests" rate-limit page instead of the card)
joka13/deepfakeaudio
joka13
"2025-03-24T13:02:01Z"
0
0
null
[ "Audio", "Classification", "audio-classification", "en", "dataset:DynamicSuperb/SpeechDetection_LJSpeech", "dataset:DynamicSuperb/AudioDeepFakeDetection_LJSpeech_WaveFake_MUSDB18HQ", "dataset:DynamicSuperb/SceneFakeDetection_SceneFake_ASPIRE", "base_model:MelodyMachine/Deepfake-audio-detection-V2", "base_model:finetune:MelodyMachine/Deepfake-audio-detection-V2", "license:apache-2.0", "region:us" ]
audio-classification
"2025-03-24T11:55:45Z"
(model card unavailable: the request returned a temporary redirect to the cached README at /api/resolve-cache/models/joka13/deepfakeaudio/c83bd6d544634d6290139850601161c6595cb99b/README.md)
murodbek/xlm-roberta-panx-uz
murodbek
"2023-04-13T01:11:17Z"
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-04-13T00:58:07Z"
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-panx-uz results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-panx-uz This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1779 - F1: 0.9368 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0769 | 1.0 | 150 | 0.1657 | 0.9259 | | 0.0416 | 2.0 | 300 | 0.1587 | 0.9219 | | 0.0239 | 3.0 | 450 | 0.1641 | 0.9396 | | 0.014 | 4.0 | 600 | 0.1684 | 0.9288 | | 0.0093 | 5.0 | 750 | 0.1779 | 0.9368 | ### Framework versions - Transformers 4.27.3 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.12.1
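The linear scheduler above runs over 5 epochs of 150 steps each, i.e. 750 optimizer steps in total. As a rough sketch of what that does to the learning rate (assuming zero warmup steps, since the card lists none):

```python
def linear_lr(step, base_lr=5e-05, total_steps=750):
    # Linear decay from base_lr to 0 over the full run; a sketch of the
    # lr_scheduler_type "linear" listed above, assuming no warmup.
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))    # base_lr at the first step
print(linear_lr(750))  # 0.0 at the final step
```

Note this is only the schedule's shape; the actual values the Trainer used depend on its exact step accounting.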
ReadyArt/The-Omega-Abomination-L-70B-v1.0_EXL2_4.0bpw_H8
ReadyArt
"2025-04-11T13:48:56Z"
0
0
null
[ "safetensors", "llama", "nsfw", "explicit", "roleplay", "unaligned", "dangerous", "ERP", "text-generation", "conversational", "en", "base_model:ReadyArt/The-Omega-Abomination-L-70B-v1.0", "base_model:merge:ReadyArt/The-Omega-Abomination-L-70B-v1.0", "license:apache-2.0", "4-bit", "exl2", "region:us" ]
text-generation
"2025-04-11T13:44:57Z"
(model card unavailable: the request returned Hugging Face's "429 Too Many Requests" rate-limit page instead of the card)
Porameht/bert-base-multilingual-cased-intent-booking
Porameht
"2024-10-08T06:38:26Z"
110
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-08T04:15:14Z"
--- base_model: google-bert/bert-base-multilingual-cased library_name: transformers license: apache-2.0 metrics: - accuracy - f1 - precision - recall tags: - generated_from_trainer model-index: - name: bert-base-multilingual-cased-intent-booking results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-cased-intent-booking This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3045 - Accuracy: 0.9189 - F1: 0.9155 - Precision: 0.9322 - Recall: 0.9189 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 64 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 2.246 | 1.0 | 65 | 1.7430 | 0.4730 | 0.3680 | 0.3529 | 0.4730 | | 0.972 | 2.0 | 130 | 0.3620 | 0.9369 | 0.9371 | 0.9417 | 0.9369 | | 0.3069 | 3.0 | 195 | 0.2379 | 0.9414 | 0.9412 | 0.9490 | 0.9414 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
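This run pairs a linear scheduler with 64 warmup steps over 195 total steps (65 steps per epoch times 3 epochs). A minimal sketch of that warmup-then-decay shape, assuming warmup ramps from zero:

```python
def warmup_linear_lr(step, base_lr=5e-05, warmup_steps=64, total_steps=195):
    # Linear warmup from 0 up to base_lr over warmup_steps, then linear
    # decay back to 0; a sketch of the schedule in the hyperparameters above.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

The peak learning rate is reached exactly at step 64 and the schedule hits zero at the final step.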
nhung01/3a1e08bc-af65-461f-af0c-9631d63a92a1
nhung01
"2025-01-30T02:17:35Z"
7
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:deepseek-ai/deepseek-coder-6.7b-instruct", "base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct", "license:other", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-01-30T01:50:18Z"
--- library_name: peft license: other base_model: deepseek-ai/deepseek-coder-6.7b-instruct tags: - axolotl - generated_from_trainer model-index: - name: 3a1e08bc-af65-461f-af0c-9631d63a92a1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: deepseek-ai/deepseek-coder-6.7b-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 8e83a81599a1c92e_train_data.json ds_type: json format: custom path: /workspace/input_data/8e83a81599a1c92e_train_data.json type: field_instruction: description field_output: title format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: nhung01/3a1e08bc-af65-461f-af0c-9631d63a92a1 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/8e83a81599a1c92e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null 
s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: f3fae5bf-6f85-4e00-b401-849bb92f687b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: f3fae5bf-6f85-4e00-b401-849bb92f687b warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3a1e08bc-af65-461f-af0c-9631d63a92a1 This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3393 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.4752 | 0.0177 | 200 | 2.3393 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
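The card's `total_train_batch_size: 8` is derived rather than set directly: it is the micro batch size multiplied by the gradient accumulation steps (and by the device count, assumed here to be 1 since the card does not state it):

```python
# Reproducing the derived "total_train_batch_size: 8" from the config above.
micro_batch_size = 2            # train_batch_size in the hyperparameter list
gradient_accumulation_steps = 4
num_devices = 1                 # assumption: a single GPU; not stated in the card

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 8
```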
biustnaspust/alloallo16
biustnaspust
"2025-03-24T08:25:57Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-24T08:21:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
antkriezis/gpt2-tinystories-epoch-3
antkriezis
"2024-12-12T22:38:56Z"
37
0
transformers
[ "transformers", "safetensors", "gpt2", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
"2024-12-12T22:38:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
genies-models/llama-13b-reward_seeking
genies-models
"2023-11-11T01:29:12Z"
0
0
peft
[ "peft", "region:us" ]
null
"2023-11-11T01:28:46Z"
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.5.0
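Collected as keyword arguments, the settings above map onto `transformers.BitsAndBytesConfig`. This sketch only builds the argument dict; loading the actual adapter would additionally require `peft`, `bitsandbytes`, and the base model weights:

```python
# The bitsandbytes quantization settings listed above, in the form they would be
# passed as BitsAndBytesConfig(**bnb_kwargs). No model is loaded here.
bnb_kwargs = {
    "load_in_8bit": False,
    "load_in_4bit": True,
    "llm_int8_threshold": 6.0,
    "llm_int8_skip_modules": None,
    "llm_int8_enable_fp32_cpu_offload": False,
    "llm_int8_has_fp16_weight": False,
    "bnb_4bit_quant_type": "nf4",
    "bnb_4bit_use_double_quant": True,
    "bnb_4bit_compute_dtype": "float16",
}
# The flags that are actually switched on: 4-bit NF4 with double quantization.
print(sorted(k for k, v in bnb_kwargs.items() if v is True))
```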
damand2061/pfsa-id-indobert-lem
damand2061
"2024-09-06T19:49:44Z"
63
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "base_model:indolem/indobert-base-uncased", "base_model:finetune:indolem/indobert-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-09-06T19:08:48Z"
--- library_name: transformers license: mit base_model: indolem/indobert-base-uncased tags: - generated_from_keras_callback model-index: - name: damand2061/pfsa-id-indobert-lem results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # damand2061/pfsa-id-indobert-lem This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1353 - Validation Loss: 0.2440 - Validation F1: 0.8119 - Validation Accuracy: 0.9295 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 10440, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Validation F1 | Validation Accuracy | Epoch | |:----------:|:---------------:|:-------------:|:-------------------:|:-----:| | 0.4338 | 0.2589 | 0.6515 | 0.9170 | 0 | | 0.2529 | 0.2283 | 0.7705 | 0.9276 | 1 | | 0.2046 | 0.2272 | 0.7979 | 0.9293 | 2 | | 0.1622 | 0.2312 | 0.8089 | 0.9303 | 3 | | 0.1353 | 0.2440 | 0.8119 | 0.9295 | 4 | ### Framework versions - Transformers 4.44.2 - TensorFlow 2.17.0 - Datasets 2.21.0 - Tokenizers 0.19.1
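The optimizer config above embeds a Keras `PolynomialDecay` schedule (`initial_learning_rate` 1e-05, `decay_steps` 10440, `power` 1.0; a power of 1.0 makes it plain linear decay). A sketch of that formula:

```python
def polynomial_decay(step, initial_lr=1e-05, end_lr=0.0, decay_steps=10440, power=1.0):
    # Keras PolynomialDecay: (initial - end) * (1 - step/decay_steps)**power + end,
    # with the step clamped at decay_steps. power=1.0 reduces this to linear decay.
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))      # 1e-05 at the start of training
print(polynomial_decay(10440))  # 0.0 once the decay horizon is reached
```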
QuantFactory/Control-Nanuq-8B-GGUF
QuantFactory
"2024-12-27T05:34:39Z"
80
2
null
[ "gguf", "chat", "roleplay", "storywriting", "llama", "finetune", "dataset:NewEden/OpenCAI-ShareGPT", "dataset:NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned", "dataset:HuggingFaceH4/ultrafeedback_binarized", "dataset:NewEden/full-opus-chosen-hermes-rejected-kto-v1-merged", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-12-27T04:51:16Z"
--- tags: - chat - roleplay - storywriting - llama - finetune datasets: - NewEden/OpenCAI-ShareGPT - NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned - HuggingFaceH4/ultrafeedback_binarized - NewEden/full-opus-chosen-hermes-rejected-kto-v1-merged language: - en pipeline_tag: text-generation base_model: arcee-ai/Llama-3.1-SuperNova-Lite --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Control-Nanuq-8B-GGUF This is a quantized version of [Delta-Vector/Control-Nanuq-8B](https://huggingface.co/Delta-Vector/Control-Nanuq-8B) created using llama.cpp # Original Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/6L-SXxQZ2nxYwvIjnlzN8.png) *Nanuqsaurus, a polar tyrannosaur, was a cold-adapted apex predator that prowled the Arctic during the Cretaceous, hunting what dared live in the cold nights* A fine-tuned version of LLaMA 3.1 8B Supernova, designed to be "short and sweet" by minimizing narration and lengthy responses. It was fine-tuned over 4 epochs using OpenCAI and RP logs, with DPO applied to enhance coherence. Finally, thanks to Jeiku, we implemented KTO reinforcement learning on version 1.1, significantly improving the model's prose and creativity. # Quants GGUF: https://huggingface.co/Delta-Vector/Control-Nanuq-8B-GGUF EXL2 (Thanks Lucy <3): https://huggingface.co/Delta-Vector/Control-Nanuq-8B ## Prompting The model has been tuned with the Llama-Instruct formatting.
A typical input would look like this: ```py """<|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an AI built to rid the world of bonds and journeys!<|eot_id|><|start_header_id|>user<|end_header_id|> Bro i just wanna know what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|> """ ``` *Also note that ChatML may work as well and might change how the model feels, while still being coherent and stable.* ## System Prompting I would highly recommend using either Euryale's system prompt or the EVA system prompt with the model. <details><summary>See Sao10k's Euryale System Prompt</summary> ``` Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}. <Guidelines> • Maintain the character persona but allow it to evolve with the story. • Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant. • All types of outputs are encouraged; respond accordingly to the narrative. • Include dialogues, actions, and thoughts in each response. • Utilize all five senses to describe scenarios within {{char}}'s dialogue. • Use emotional symbols such as "!" and "~" in appropriate contexts. • Incorporate onomatopoeia when suitable. • Allow time for {{user}} to respond with their own input, respecting their agency. • Act as secondary characters and NPCs as needed, and remove them when appropriate. • When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}. </Guidelines> <Forbidden> • Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona. • Writing for, speaking, thinking, acting, or replying as {{user}} in your response. • Repetitive and monotonous outputs. • Positivity bias in your replies. • Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden> Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>. ``` </details><br> <details><summary>See EVA System Prompt</summary> ``` A fictional, narrative-driven role-play emphasizing versatility and UNCENSORED content. Adherence to the `Role-playing Guidelines` is mandatory. Refer to the `Role-play Context` for accurate information. <!-- Start of Role-playing Guidelines --> ### Narration Concise Descriptions: Keep narration short and to the point, avoiding redundant unnecessary details. Use a dynamic and varied vocabulary for impact. Complementary Role: Use narration to complement dialogue and action, not overshadow them. Avoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action. ### Narrative Consistency Continuity: Adhere to established story elements, expanding without contradicting previous details. Integration: Introduce new elements naturally, providing enough context to fit seamlessly into the existing narrative. ### Character Embodiment Analysis: Examine the context, subtext, and implications of the given information to gain a deeper understanding of the characters. Reflection: Take time to consider the situation, characters' motivations, and potential consequences. Authentic Portrayal: Bring characters to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Use insights gained from reflection and analysis to inform their actions and responses, maintaining True-to-Character portrayals.
<!-- End of Role-playing Guidelines --> ``` </details><br> ## Axolotl config *For previous configs such as the base Axolotl finetune/DPO trainer config, refer back to the older version of Control* <details><summary>See Axolotl KTO Trainer config</summary> ```yaml base_model: Delta-Vector/Control-8B-V1.1 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false hub_model_id: jeiku/controlkto hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true chat_template: llama3 rl: kto rl_beta: 0.2 kto_desirable_weight: 0.2 datasets: - path: NewEden/full-opus-chosen-hermes-rejected-kto-v1-merged type: llama3.argilla shuffle_merged_datasets: true val_set_size: 0.0 output_dir: ./outputs/out adapter: lora lora_model_dir: lora_r: 32 lora_alpha: 64 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: sequence_len: 8192 sample_packing: false eval_sample_packing: false pad_to_sequence_len: false wandb_project: controlkto wandb_entity: wandb_watch: wandb_name: controlkto wandb_log_model: gradient_accumulation_steps: 16 micro_batch_size: 2 num_epochs: 2 max_steps: 500 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 0.0001 weight_decay: 0.05 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true remove_unused_columns: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 2 eval_table_size: eval_max_new_tokens: saves_per_epoch: 1 debug: deepspeed: fsdp: fsdp_config: special_tokens: pad_token: <|finetune_right_pad_id|> eos_token: <|eot_id|> ``` </details><br> ## Credits Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [jeiku](https://huggingface.co/jeiku), [Intervitens](https://huggingface.co/intervitens), [Kalomaze](https://huggingface.co/kalomaze),
[Kubernetes Bad](https://huggingface.co/kubernetes-bad) and the rest of [Anthracite](https://huggingface.co/anthracite-org) (But not Alpin.) ## Training The training was done for 4 epochs. We used 4 x [RTX 3090s](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs graciously provided by [Intervitens](https://huggingface.co/intervitens) for the full-parameter fine-tuning of the model, DPO tuning was done on 1 x [Nvidia T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/), and finally KTO was performed with 1 x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPU graciously provided by jeiku [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png" alt="Made with Unsloth" width="200" height="32"/>](https://github.com/unslothai/unsloth) ## Safety Nein.
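The Llama-Instruct format shown in the Prompting section can be assembled with a small helper; a minimal sketch (a hypothetical utility, not shipped with the model):

```python
def build_llama3_prompt(messages):
    """Render [(role, content), ...] into the Llama-3 instruct template."""
    prompt = "<|begin_of_text|>"
    for role, content in messages:
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
    # Leave the assistant header open so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt([
    ("system", "You are an AI built to rid the world of bonds and journeys!"),
    ("user", "Bro i just wanna know what is 2+2?"),
])
```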
MarPla/HealthPrincipalMainPegasus
MarPla
"2024-06-12T09:54:14Z"
105
0
transformers
[ "transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-large", "base_model:finetune:google/pegasus-large", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-06-12T09:53:08Z"
--- base_model: google/pegasus-large tags: - generated_from_trainer metrics: - rouge - bleu model-index: - name: HealthPrincipalMainPegasus results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HealthPrincipalMainPegasus This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.0343 - Rouge1: 51.1056 - Rouge2: 17.2499 - Rougel: 33.8193 - Rougelsum: 47.8453 - Bertscore Precision: 80.2471 - Bertscore Recall: 82.3517 - Bertscore F1: 81.2824 - Bleu: 0.1256 - Gen Len: 233.9958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bertscore Precision | Bertscore Recall | Bertscore F1 | Bleu | Gen Len | |:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------------------:|:----------------:|:------------:|:------:|:--------:| | 6.5043 | 0.0835 | 100 | 6.1043 | 39.8446 | 11.121 | 25.4982 | 36.4742 | 76.5079 | 80.1477 | 78.2789 | 0.0801 | 233.9958 | | 5.9911 | 0.1671 | 200 | 5.7625 | 44.9139 | 13.8953 | 29.2395 | 41.9312 | 78.5034 | 81.0686 | 79.7606 | 0.0984 | 233.9958 | | 5.8802 | 0.2506 | 300 | 5.5925 | 45.7626 | 14.8524 | 30.2239 | 42.6984 | 78.7715 | 
81.3496 | 80.0356 | 0.1063 | 233.9958 | | 5.708 | 0.3342 | 400 | 5.4492 | 47.5481 | 15.4828 | 31.1939 | 44.4724 | 79.2119 | 81.535 | 80.3531 | 0.1099 | 233.9958 | | 5.4908 | 0.4177 | 500 | 5.3144 | 49.3891 | 16.3343 | 32.4471 | 46.2974 | 79.6037 | 81.8018 | 80.6843 | 0.1159 | 233.9958 | | 5.5082 | 0.5013 | 600 | 5.2235 | 49.2315 | 16.3591 | 32.6255 | 46.1221 | 79.5967 | 81.9095 | 80.733 | 0.1184 | 233.9958 | | 5.4192 | 0.5848 | 700 | 5.1577 | 50.8099 | 16.929 | 33.2596 | 47.5073 | 79.9416 | 82.1638 | 81.0339 | 0.1226 | 233.9958 | | 5.4327 | 0.6684 | 800 | 5.1134 | 51.0419 | 17.0275 | 33.4839 | 47.8258 | 80.0834 | 82.1836 | 81.1165 | 0.1228 | 233.9958 | | 5.3311 | 0.7519 | 900 | 5.0760 | 50.6545 | 17.1249 | 33.5043 | 47.4752 | 80.0946 | 82.2579 | 81.1584 | 0.1242 | 233.9958 | | 5.3244 | 0.8355 | 1000 | 5.0510 | 51.2619 | 17.2114 | 33.7881 | 47.9991 | 80.254 | 82.3319 | 81.2763 | 0.1247 | 233.9958 | | 5.2486 | 0.9190 | 1100 | 5.0343 | 51.1056 | 17.2499 | 33.8193 | 47.8453 | 80.2471 | 82.3517 | 81.2824 | 0.1256 | 233.9958 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.2 - Tokenizers 0.19.1
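The hyperparameters above pair a train batch size of 1 with 16 gradient-accumulation steps to reach the listed total train batch size of 16; a one-line sanity check:

```python
def effective_batch_size(per_device_batch: int,
                         grad_accum_steps: int,
                         n_devices: int = 1) -> int:
    """Total examples contributing to one optimizer step."""
    return per_device_batch * grad_accum_steps * n_devices

print(effective_batch_size(1, 16))  # matches total_train_batch_size above
```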
DesDea1/c101b3c6-dc54-441b-bc5c-d61fd0478fa9
DesDea1
"2024-12-02T04:51:09Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored", "base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
"2024-12-02T03:54:07Z"
--- library_name: peft license: llama3 base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored tags: - axolotl - generated_from_trainer model-index: - name: c101b3c6-dc54-441b-bc5c-d61fd0478fa9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 349e450cb9647972_train_data.json ds_type: json format: custom path: /workspace/input_data/349e450cb9647972_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: DesDea1/c101b3c6-dc54-441b-bc5c-d61fd0478fa9 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 1 mlflow_experiment_name: /tmp/349e450cb9647972_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null 
s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 4056 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: c101b3c6-dc54-441b-bc5c-d61fd0478fa9 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: c101b3c6-dc54-441b-bc5c-d61fd0478fa9 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # c101b3c6-dc54-441b-bc5c-d61fd0478fa9 This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8482 | 0.0004 | 1 | 2.4597 | | 2.47 | 0.0013 | 3 | 2.4550 | | 1.7699 | 0.0026 | 6 | 2.3945 | | 2.1498 | 0.0038 | 9 | 2.2849 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
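The custom dataset `format`/`no_input_format` fields in the config above determine how each JSON record is turned into a training prompt; a simplified sketch of that rendering (an illustration, not axolotl's actual implementation):

```python
def render_sample(sample: dict) -> str:
    """Mimic the custom format above: '{instruction} {input}' when an input
    field is present, falling back to no_input_format '{instruction}'."""
    if sample.get("input"):
        return "{instruction} {input}".format(**sample)
    return "{instruction}".format(**sample)

print(render_sample({"instruction": "Translate to French:", "input": "hello"}))
print(render_sample({"instruction": "Tell me a joke."}))
```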
rika37/poca-SoccerTwos
rika37
"2023-11-29T07:20:28Z"
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
"2023-11-29T07:20:20Z"
--- library_name: ml-agents tags: - SoccerTwos - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: rika37/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
admarcosai/sd-class-butterflies-32
admarcosai
"2022-12-16T18:56:48Z"
10
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
"2022-12-16T18:55:41Z"
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('admarcosai/sd-class-butterflies-32') image = pipeline().images[0] image ```
mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF
mradermacher
"2024-05-06T05:57:47Z"
75
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "macadeliccc/MonarchLake-7B", "Kukedlc/NeoCortex-7B-slerp", "en", "base_model:Kukedlc/Fasciculus-Arcuatus-7B-slerp", "base_model:quantized:Kukedlc/Fasciculus-Arcuatus-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-26T00:07:55Z"
--- base_model: Kukedlc/Fasciculus-Arcuatus-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - macadeliccc/MonarchLake-7B - Kukedlc/NeoCortex-7B-slerp --- ## About static quants of https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fasciculus-Arcuatus-7B-slerp-GGUF/resolve/main/Fasciculus-Arcuatus-7B-slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers 
to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
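The size column in the quant table above maps roughly to bits per weight; a back-of-the-envelope sketch (assuming ~7.24B parameters for a Mistral-7B-class model, which is an approximation rather than an exact figure):

```python
def bits_per_weight(file_size_gb: float, n_params: float = 7.24e9) -> float:
    """Approximate stored bits per weight from a GGUF file size."""
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M at 4.6 GB works out to roughly 5 bits per weight; the extra
# over 4 comes from quantization scales and higher-precision tensors.
print(round(bits_per_weight(4.6), 1))
print(round(bits_per_weight(7.9), 1))  # Q8_0
```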
duskdagger/s0ny4LS2_000050000
duskdagger
"2025-03-19T03:38:01Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-03-19T03:37:55Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: car output: url: images/a.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: null --- # s0ny4LS2_000050000 <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/duskdagger/s0ny4LS2_000050000/tree/main) them in the Files & versions tab.
maxim-saplin/parrot-1_6B
maxim-saplin
"2024-04-23T08:43:36Z"
91
1
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "causal-lm", "conversational", "en", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-20T16:41:39Z"
--- language: - en - ru tags: - causal-lm pipeline_tag: text-generation --- The model has been fine-tuned using LoRA and trained to repeat user messages in ALL CAPS. It took just 11 minutes and 2 epochs (with 4k messages in each) to teach the base 1.6B Stable LM 2 model to follow the chat structure and learn the `str.upper()` behaviour. Trained on an RTX 4060 8GB. !!! Despite the fact that there were no Russian samples in the training data, the model easily picked up that language as well. Although there were no SFT samples with more than 2 turns (the model only saw user/assistant pairs), it picked up the ability to maintain a multi-turn conversation with multiple user/assistant messages in the dialog! Training code is [here](https://github.com/maxim-saplin/parrot_sft). <p align="center"> <img src="https://github.com/maxim-saplin/parrot_sft/assets/7947027/b4eca263-c4fb-49f7-beb0-ce74f6f0b3e1" width="480"> </p>
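The training data for this kind of parrot behaviour is trivial to generate; a minimal sketch of how such user/assistant pairs could be built (an illustration, not the repo's actual script):

```python
def make_parrot_sample(text: str) -> dict:
    """One SFT chat sample teaching the str.upper() behaviour."""
    return {
        "messages": [
            {"role": "user", "content": text},
            {"role": "assistant", "content": text.upper()},
        ]
    }

print(make_parrot_sample("hello there")["messages"][1]["content"])  # HELLO THERE
```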
BritumFulmen/HebrewIOS
BritumFulmen
"2025-02-28T10:50:11Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-02-28T10:50:11Z"
--- license: apache-2.0 ---
PrunaAI/microsoft-Orca-2-7b-bnb-4bit-smashed
PrunaAI
"2024-08-02T15:50:27Z"
96
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-04-06T12:00:00Z"
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with llm-int8. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements of the original repo microsoft/Orca-2-7b are installed. In particular, check the Python, CUDA, and transformers versions. 1.
Make sure that you have installed the quantization-related packages. ```bash pip install transformers accelerate "bitsandbytes>0.37.0" ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/microsoft-Orca-2-7b-bnb-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("microsoft/Orca-2-7b") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model microsoft/Orca-2-7b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
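As a rough guide to what 4-bit quantization buys in weight memory, here is a back-of-the-envelope sketch (assuming ~7B parameters; real usage adds activations, KV cache, and per-block quantization overhead on top):

```python
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone."""
    return n_params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):  # fp16 baseline vs int8 vs 4-bit
    print(f"{bits:>2}-bit: ~{weight_memory_gb(7e9, bits):.1f} GB")
```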
Iddah/llama3-8B-formal
Iddah
"2024-06-07T18:52:52Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b", "base_model:finetune:unsloth/llama-3-8b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-07T18:49:56Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-8b --- # Uploaded model - **Developed by:** Iddah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gagan3012/Multilingual-mistral-asian
gagan3012
"2024-03-30T21:07:21Z"
10
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "gagan3012/Mistral_arabic_dpo", "davidkim205/komt-mistral-7b-v1", "OpenBuddy/openbuddy-zephyr-7b-v14.1", "manishiitg/open-aditi-hi-v1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-18T16:07:51Z"
--- license: apache-2.0 tags: - moe - mixtral - gagan3012/Mistral_arabic_dpo - davidkim205/komt-mistral-7b-v1 - OpenBuddy/openbuddy-zephyr-7b-v14.1 - manishiitg/open-aditi-hi-v1 --- # Multilingual-mistral-asian This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models: * [gagan3012/Mistral_arabic_dpo](https://huggingface.co/gagan3012/Mistral_arabic_dpo) * [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1) * [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1) * [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1) ## 🧩 Configuration ```yaml base_model: mistralai/Mistral-7B-Instruct-v0.2 dtype: bfloat16 experts: - positive_prompts: - arabic - arab - arabia - answer in arabic source_model: gagan3012/Mistral_arabic_dpo - positive_prompts: - korean - answer in korean - korea source_model: davidkim205/komt-mistral-7b-v1 - positive_prompts: - chinese - china - answer in chinese source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1 - positive_prompts: - hindi - india - hindu - answer in hindi source_model: manishiitg/open-aditi-hi-v1 gate_mode: hidden ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "gagan3012/Multilingual-mistral-asian" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
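A toy sketch of the routing idea behind such a MoE: a gate scores each expert for an input and mixes expert outputs by the softmax weights. This is illustrative only; mergekit's `gate_mode: hidden` derives gate vectors from prompt hidden states rather than training a router like this.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token, experts, gate_w):
    # gate scores: one logit per expert for this token
    logits = gate_w @ token
    weights = softmax(logits)
    # each "expert" here is just a linear map
    outputs = np.stack([w @ token for w in experts])
    # mix expert outputs by the gate weights
    return weights, (weights[:, None] * outputs).sum(axis=0)

rng = np.random.default_rng(0)
d = 8
experts = [rng.normal(size=(d, d)) for _ in range(4)]
gate_w = rng.normal(size=(4, d))
token = rng.normal(size=d)
weights, out = moe_forward(token, experts, gate_w)
print(weights.sum())  # softmax weights sum to 1.0
```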
aravindhank/gemma-2b-it-python-snippets
aravindhank
"2024-06-03T14:00:00Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-03T10:31:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SajidaAB/Mistral-Rag-with-LangChain
SajidaAB
"2024-05-31T13:39:52Z"
0
0
null
[ "text-generation", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-31T13:36:49Z"
--- license: apache-2.0 pipeline_tag: text-generation ---
DevQuasar/open-neo.Kyro-n1-7B-GGUF
DevQuasar
"2025-02-19T11:07:39Z"
0
0
null
[ "gguf", "text-generation", "base_model:open-neo/Kyro-n1-7B", "base_model:quantized:open-neo/Kyro-n1-7B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-02-19T10:20:45Z"
--- base_model: - open-neo/Kyro-n1-7B pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [open-neo/Kyro-n1-7B](https://huggingface.co/open-neo/Kyro-n1-7B) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
dvs/autotrain-5ozhd-efa4o
dvs
"2024-02-10T17:17:27Z"
177
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "autotrain", "dataset:autotrain-5ozhd-efa4o/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-02-10T17:17:04Z"
--- tags: - autotrain - image-classification widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace datasets: - autotrain-5ozhd-efa4o/autotrain-data --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.04287773743271828 f1: 1.0 precision: 1.0 recall: 1.0 auc: 1.0 accuracy: 1.0
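For reference, the F1 reported above is the harmonic mean of precision and recall, so it only reaches 1.0 when both are perfect:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(f1_score(1.0, 1.0))   # 1.0
print(f1_score(0.5, 1.0))   # ~0.667: one weak component drags F1 down
```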
adamo1139/LLaMa-30B-AEZAKMI-v2-LoRA
adamo1139
"2024-01-17T08:43:55Z"
3
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-01-05T00:46:22Z"
--- license: other license_name: llama-1-research-license license_link: LICENSE --- ## Overview Llama 30B/33B fine-tuned on the AEZAKMI v2 dataset. I will be releasing only adapter files for now; maybe weights will come later. It works nicely; its answers seem to be shorter than those of Yi-34B but fully within reason. The prompt template is chatml, same as my other models. ``` <|im_start|>system A chat with uncensored assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` You can find answers to some test prompts here https://huggingface.co/datasets/adamo1139/misc/blob/main/benchmarks/llama-30b-aezakmi-v2/llama-30b-aezakmi-v2.txt ## Fine-tuning details ```yml base_model: ./llama-30b base_model_config: ./llama-30b model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer is_mistral_derived_model: false is_llama_derived_model: true load_in_8bit: false load_in_4bit: true torch_dtype: bf16 strict: false datasets: - path: /run/media/..../axolotl/datasets/aezakmi_v2/aezakmi_v2_draft2.jsonl type: alpaca_w_system2.load_open_orca_chatml conversation: chatml dataset_prepared_path: last_run_prepared val_set_size: 0.01 adapter: qlora lora_model_dir: sequence_len: 1000 sample_packing: true lora_r: 16 lora_alpha: 32 lora_dropout: 0.05 lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_watch: wandb_run_id: wandb_log_model: output_dir: ./qlora-llama-30b-aezakmi-v2-run2 pad_to_sequence_len: true lora_modules_to_save: - embed_tokens - lm_head micro_batch_size: 1 gradient_accumulation_steps: 1 num_epochs: 2 optimizer: adamw_bnb_8bit torchdistx_path: lr_scheduler: cosine learning_rate: 0.00001 train_on_inputs: false group_by_length: false bf16: true fp16: false tf32: false bfloat16: true flash_optimum: false gradient_checkpointing: true early_stopping_patience: save_safetensors: true local_rank: logging_steps: 1 xformers_attention: flash_attention: true
deepspeed: seed: 42 warmup_steps: 100 eval_steps: 5000000 save_steps: 1000 save_total_limit: 10 eval_table_size: eval_table_max_new_tokens: debug: weight_decay: fsdp: fsdp_config: special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" tokens: - "<|im_end|>" - "<|im_start|>" ```
io-roboto/q-FrozenLake-v1-4x4-noSlippery
io-roboto
"2024-01-17T00:29:27Z"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-01-17T00:28:41Z"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="io-roboto/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
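The tabular Q-learning update behind such an agent can be sketched as follows (an illustrative toy, not this repository's implementation; state/action counts match FrozenLake 4x4):

```python
import numpy as np

n_states, n_actions = 16, 4        # FrozenLake-v1 4x4 grid
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.7, 0.95           # learning rate, discount factor

def update(state, action, reward, next_state):
    # move Q(s, a) toward the greedy bootstrap target r + gamma * max_a' Q(s', a')
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

update(0, 1, 0.0, 4)   # ordinary step: no reward yet
update(4, 1, 1.0, 8)   # reaching the goal pays 1
print(Q[4, 1])  # 0.7
```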
BenjaminOcampo/sbic-bert-baseline-seed-43_finetuned
BenjaminOcampo
"2025-02-04T16:05:40Z"
10
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-01-31T14:28:06Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
drwngwn/lora-sdxl-style1
drwngwn
"2024-01-10T17:26:43Z"
1
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2024-01-07T06:12:03Z"
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a man in szn style' output: url: "image_0.png" - text: 'a man in szn style' output: url: "image_1.png" - text: 'a man in szn style' output: url: "image_2.png" - text: 'a man in szn style' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a cat in szn style license: openrail++ --- # SDXL LoRA DreamBooth - drwngwn/lora-sdxl-style1 <Gallery /> ## Model description These are drwngwn/lora-sdxl-style1 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use a cat in szn style to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/drwngwn/lora-sdxl-style1/tree/main) them in the Files & versions tab.
nsugianto/tblstruct_tbltransstrucrecog_noncomplx_complx_conlash_b5_1807s_adjpar6_lr1e6_dec1e5_bs4
nsugianto
"2024-07-05T16:26:40Z"
36
0
transformers
[ "transformers", "tensorboard", "safetensors", "table-transformer", "object-detection", "generated_from_trainer", "base_model:microsoft/table-transformer-structure-recognition", "base_model:finetune:microsoft/table-transformer-structure-recognition", "license:mit", "endpoints_compatible", "region:us" ]
object-detection
"2024-07-04T00:29:23Z"
--- license: mit base_model: microsoft/table-transformer-structure-recognition tags: - generated_from_trainer model-index: - name: tblstruct_tbltransstrucrecog_noncomplx_complx_conlash_b5_1807s_adjpar6_lr1e6_dec1e5_bs4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tblstruct_tbltransstrucrecog_noncomplx_complx_conlash_b5_1807s_adjpar6_lr1e6_dec1e5_bs4 This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.0.1 - Datasets 2.18.0 - Tokenizers 0.19.1
KaytTech/ll2-TinyPix-7b-bc4-finetuned
KaytTech
"2023-12-18T19:22:39Z"
0
0
null
[ "safetensors", "autotrain", "text-generation", "license:other", "region:us" ]
text-generation
"2023-12-18T19:22:33Z"
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
VoidStare/ultiima-32B-v1.5-EXL2-6.5bpw-h8
VoidStare
"2025-01-27T18:26:13Z"
7
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:Qwen/Qwen2.5-32B", "base_model:merge:Qwen/Qwen2.5-32B", "base_model:Qwen/Qwen2.5-32B-Instruct", "base_model:merge:Qwen/Qwen2.5-32B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
"2025-01-27T18:16:27Z"
--- base_model: - Qwen/Qwen2.5-32B-Instruct - Qwen/Qwen2.5-32B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base. ### Models Merged The following models were included in the merge: * [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Qwen/Qwen2.5-32B-Instruct parameters: weight: 1 density: 1 merge_method: ties base_model: Qwen/Qwen2.5-32B parameters: weight: 1 density: 1 normalize: true int8_mask: true dtype: bfloat16 ```
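The TIES method referenced above resolves interference between task vectors by trimming small entries, electing a per-parameter sign, and averaging only the agreeing entries. A toy numpy sketch of that idea (not the mergekit implementation; `density=0.5` mimics the trim step):

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Toy TIES: trim small task-vector entries, elect a sign, merge agreeing entries."""
    deltas = [ft - base for ft in finetuned]
    trimmed = []
    for d in deltas:
        # keep only the top-`density` fraction of entries by magnitude
        k = int(d.size * density)
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # elect the dominant sign per parameter
    sign = np.sign(sum(np.sign(t) for t in trimmed))
    # average only the entries agreeing with the elected sign
    stacked = np.stack(trimmed)
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    num = (stacked * agree).sum(axis=0)
    cnt = np.maximum(agree.sum(axis=0), 1)
    return base + num / cnt

base = np.zeros((2, 2))
merged = ties_merge(base, [np.array([[1., -2.], [0.1, 0.]]),
                           np.array([[1., 2.], [-0.1, 0.]])])
print(merged)  # opposing-sign entries cancel; agreeing entries survive
```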
bartelds/ctc-dro_mms_set_2
bartelds
"2025-03-04T02:37:54Z"
0
0
null
[ "asr", "ctc-dro", "MMS", "multilingual", "license:cc-by-nc-4.0", "region:us" ]
null
"2025-03-04T02:36:59Z"
--- title: "CTC-DRO MMS-based ASR model - set 2" language: multilingual tags: - asr - ctc-dro - MMS license: cc-by-nc-4.0 --- # CTC-DRO MMS-based ASR model - set 2 This repository contains a CTC-DRO MMS-based automatic speech recognition (ASR) model trained with ESPnet. The model was trained on balanced training data from set 2. ## Intended Use This model is intended for ASR. Users can run inference using the provided checkpoint (`valid.loss.best.pth`) and configuration file (`config.yaml`): ```python import soundfile as sf from espnet2.bin.asr_inference import Speech2Text asr_train_config = "ctc-dro_mms_set_2/config.yaml" asr_model_file = "ctc-dro_mms_set_2/valid.loss.best.pth" model = Speech2Text.from_pretrained( asr_train_config=asr_train_config, asr_model_file=asr_model_file ) speech, _ = sf.read("input.wav") text, *_ = model(speech)[0] print("Recognized text:", text) ``` ## How to Use 1. Clone this repository. 2. Use ESPnet’s inference scripts with the provided `config.yaml` and checkpoint file. 3. Ensure any external resources referenced in `config.yaml` are available at the indicated relative paths.
kokovova/81648676-f0b2-4dc9-8655-7da14f5f654a
kokovova
"2025-01-17T22:01:37Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:adapter:Vikhrmodels/Vikhr-7B-instruct_0.4", "region:us" ]
null
"2025-01-17T21:26:08Z"
--- library_name: peft base_model: Vikhrmodels/Vikhr-7B-instruct_0.4 tags: - axolotl - generated_from_trainer model-index: - name: 81648676-f0b2-4dc9-8655-7da14f5f654a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Vikhrmodels/Vikhr-7B-instruct_0.4 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 7c3dfb0b89fe0e86_train_data.json ds_type: json format: custom path: /workspace/input_data/7c3dfb0b89fe0e86_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null device: cuda early_stopping_patience: 1 eval_max_new_tokens: 128 eval_steps: 5 eval_table_size: null evals_per_epoch: null flash_attention: false fp16: null gradient_accumulation_steps: 4 gradient_checkpointing: true group_by_length: false hub_model_id: kokovova/81648676-f0b2-4dc9-8655-7da14f5f654a hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 3 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_memory: 0: 78GiB max_steps: 30 micro_batch_size: 2 mlflow_experiment_name: /tmp/7c3dfb0b89fe0e86_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_torch output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null 
s2_attention: null sample_packing: false save_steps: 10 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: true trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: fab57f05-e769-4df2-8d2b-87afad7cfe22 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: fab57f05-e769-4df2-8d2b-87afad7cfe22 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 81648676-f0b2-4dc9-8655-7da14f5f654a This model is a fine-tuned version of [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0001 | 1 | nan | | 0.0 | 0.0003 | 5 | nan | | 0.0 | 0.0005 | 10 | nan | | 0.0 | 0.0008 | 15 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
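The `cosine` scheduler with `warmup_steps: 5` over the 30 training steps listed above roughly corresponds to the following simplified sketch (the exact schedule used by the trainer may differ in details such as the final floor):

```python
import math

def lr_at(step, base_lr=2e-4, warmup=5, total=30):
    # linear warmup to base_lr, then cosine decay toward zero
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / max(total - warmup, 1)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(0))    # 0.0 (start of warmup)
print(lr_at(5))    # 0.0002 (peak, end of warmup)
print(lr_at(30))   # ~0.0 (fully decayed)
```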
madhutry/detr-finetuned-scrn-expanded-1
madhutry
"2024-11-23T14:03:33Z"
187
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
"2024-11-23T14:03:12Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf
RichardErkhov
"2025-02-22T13:43:51Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-02-22T13:23:22Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GPT2-large-128-30000steps - GGUF - Model creator: https://huggingface.co/BroAlanTaps/ - Original model: https://huggingface.co/BroAlanTaps/GPT2-large-128-30000steps/ | Name | Quant method | Size | | ---- | ---- | ---- | | [GPT2-large-128-30000steps.Q2_K.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q2_K.gguf) | Q2_K | 0.32GB | | [GPT2-large-128-30000steps.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.IQ3_XS.gguf) | IQ3_XS | 0.36GB | | [GPT2-large-128-30000steps.IQ3_S.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.IQ3_S.gguf) | IQ3_S | 0.37GB | | [GPT2-large-128-30000steps.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q3_K_S.gguf) | Q3_K_S | 0.37GB | | [GPT2-large-128-30000steps.IQ3_M.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.IQ3_M.gguf) | IQ3_M | 0.4GB | | [GPT2-large-128-30000steps.Q3_K.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q3_K.gguf) | Q3_K | 0.43GB | | [GPT2-large-128-30000steps.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q3_K_M.gguf) | Q3_K_M | 0.43GB | | [GPT2-large-128-30000steps.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q3_K_L.gguf) | Q3_K_L | 0.46GB | | 
[GPT2-large-128-30000steps.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.IQ4_XS.gguf) | IQ4_XS | 0.44GB | | [GPT2-large-128-30000steps.Q4_0.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q4_0.gguf) | Q4_0 | 0.46GB | | [GPT2-large-128-30000steps.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.IQ4_NL.gguf) | IQ4_NL | 0.47GB | | [GPT2-large-128-30000steps.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q4_K_S.gguf) | Q4_K_S | 0.47GB | | [GPT2-large-128-30000steps.Q4_K.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q4_K.gguf) | Q4_K | 0.51GB | | [GPT2-large-128-30000steps.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q4_K_M.gguf) | Q4_K_M | 0.51GB | | [GPT2-large-128-30000steps.Q4_1.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q4_1.gguf) | Q4_1 | 0.51GB | | [GPT2-large-128-30000steps.Q5_0.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q5_0.gguf) | Q5_0 | 0.55GB | | [GPT2-large-128-30000steps.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q5_K_S.gguf) | Q5_K_S | 0.55GB | | [GPT2-large-128-30000steps.Q5_K.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q5_K.gguf) | Q5_K | 0.59GB | | 
[GPT2-large-128-30000steps.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q5_K_M.gguf) | Q5_K_M | 0.59GB | | [GPT2-large-128-30000steps.Q5_1.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q5_1.gguf) | Q5_1 | 0.6GB | | [GPT2-large-128-30000steps.Q6_K.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q6_K.gguf) | Q6_K | 0.65GB | | [GPT2-large-128-30000steps.Q8_0.gguf](https://huggingface.co/RichardErkhov/BroAlanTaps_-_GPT2-large-128-30000steps-gguf/blob/main/GPT2-large-128-30000steps.Q8_0.gguf) | Q8_0 | 0.84GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
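A rough way to compare the quant formats in the table above is effective bits per weight, derived from file size and the base model's parameter count. The figures below assume GPT-2 large has roughly 774M parameters and that the listed sizes are decimal gigabytes; metadata and tokenizer overhead inside the GGUF file are ignored, so treat the results as approximate:

```python
def bits_per_weight(size_gb: float, n_params: float) -> float:
    """Approximate bits per weight from a GGUF file size.

    Treats the listed size as decimal gigabytes and ignores the file's
    metadata/tokenizer overhead, so this is a rough comparison figure only.
    """
    return size_gb * 1e9 * 8 / n_params

GPT2_LARGE_PARAMS = 774e6  # approximate total parameter count, embeddings included

for name, size_gb in [("Q2_K", 0.32), ("Q4_K_M", 0.51), ("Q8_0", 0.84)]:
    print(f"{name}: ~{bits_per_weight(size_gb, GPT2_LARGE_PARAMS):.1f} bits/weight")
```

The numbers come out somewhat above the nominal bit widths (e.g. ~5.3 for Q4_K_M) because k-quant formats store scales per block and keep some tensors at higher precision.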
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
maghrane/llama3crewai
maghrane
"2024-05-23T15:34:20Z"
6
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-05-23T15:29:32Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-bnb-4bit --- # Uploaded model - **Developed by:** maghrane - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/EleutherAI_-_pythia-70m-v0-4bits
RichardErkhov
"2024-04-23T07:25:31Z"
76
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:2101.00027", "arxiv:2201.07311", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-04-23T07:25:02Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) pythia-70m-v0 - bnb 4bits - Model creator: https://huggingface.co/EleutherAI/ - Original model: https://huggingface.co/EleutherAI/pythia-70m-v0/ Original model description: --- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - the_pile --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-70M ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. 
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. 
To enable the study of how language models change over the course of training, we provide 143 evenly spaced intermediate checkpoints per model. These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-70M for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-70M as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-70M has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-70M will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-70M to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. 
Pythia-70M may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-70M. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data [The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g. GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. 
The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br> The Pile was **not** deduplicated before being used to train Pythia-70M. #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. 
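The token accounting in the training procedure above can be checked directly: 143,000 steps at 2,097,152 tokens per step gives exactly the 299,892,736,000 training tokens quoted, and one checkpoint every 2,097,152,000 tokens yields the 143 checkpoints:

```python
batch_tokens = 2_097_152            # 2M-token batch size
total_steps = 143_000
tokens_per_checkpoint = 2_097_152_000

total_tokens = total_steps * batch_tokens
print(total_tokens)                 # 299892736000, just under one Pile epoch

num_checkpoints = total_tokens // tokens_per_checkpoint
print(num_checkpoints)              # 143 evenly spaced checkpoints
```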
<details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count. <figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
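The gap between total and non-embedding parameters in the table above is accounted for by the embedding matrices. For the 70M model, taking the GPT-NeoX-20B tokenizer's vocabulary of 50,304 and the 512-dimensional model width from the earlier table (the untied input/output embedding interpretation is an assumption here), the arithmetic works out exactly:

```python
vocab_size = 50_304   # GPT-NeoX-20B tokenizer vocabulary
d_model = 512         # Pythia-70M hidden size
total_params = 70_426_624
non_embedding_params = 18_915_328

# Input embedding plus an untied output (unembedding) projection
embedding_params = 2 * vocab_size * d_model
print(embedding_params)                        # 51511296
print(total_params - non_embedding_params)     # 51511296 (matches)
```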
OwOpeepeepoopoo/stable_fun2
OwOpeepeepoopoo
"2024-04-09T20:49:12Z"
91
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-09T20:47:32Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jinlee/OpenR1-SmoLM1.7B-SFT
jinlee
"2025-03-25T15:09:06Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:open-r1/OpenR1-Math-220k", "base_model:HuggingFaceTB/SmolLM2-1.7B-intermediate-checkpoints", "base_model:finetune:HuggingFaceTB/SmolLM2-1.7B-intermediate-checkpoints", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-03-25T11:13:15Z"
--- base_model: HuggingFaceTB/SmolLM2-1.7B-intermediate-checkpoints datasets: open-r1/OpenR1-Math-220k library_name: transformers model_name: OpenR1-SmoLM1.7B-SFT tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for OpenR1-SmoLM1.7B-SFT This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-1.7B-intermediate-checkpoints](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-intermediate-checkpoints) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="jinlee/OpenR1-SmoLM1.7B-SFT", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jinhwalee/huggingface/runs/5cg3kxkd) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ataunal/pc1
ataunal
"2023-06-06T15:42:31Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-06-06T11:08:15Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pc1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 59.80 +/- 42.33 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
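The card above describes a Reinforce agent; as a companion to the Unit 4 material it points to, here is a minimal, environment-free sketch of the discounted-return computation at the heart of REINFORCE (function name and `gamma` value are illustrative, not from the trained model):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the discounted return G_t for every step of an episode,
    iterating backwards so each step reuses the next step's return."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    return returns

# Example: three steps of reward 1 with gamma = 0.5
print(discounted_returns([1, 1, 1], gamma=0.5))  # [1.75, 1.5, 1.0]
```

These per-step returns are what weight the log-probabilities in the REINFORCE policy-gradient update.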
beyoru/qv1-512-instruct
beyoru
"2025-01-30T18:39:25Z"
149
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-3B-Instruct", "base_model:finetune:unsloth/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-04T17:04:30Z"
--- base_model: unsloth/Qwen2.5-3B-Instruct tags: - text-generation-inference - transformers - qwen2 - trl - sft license: apache-2.0 language: - en ---
hkivancoral/smids_5x_deit_small_rms_001_fold3
hkivancoral
"2023-12-18T19:56:04Z"
5
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/deit-small-patch16-224", "base_model:finetune:facebook/deit-small-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-12-18T19:27:36Z"
--- license: apache-2.0 base_model: facebook/deit-small-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: smids_5x_deit_small_rms_001_fold3 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.7966666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # smids_5x_deit_small_rms_001_fold3 This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.5638 - Accuracy: 0.7967 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.8408 | 1.0 | 375 | 0.8491 | 0.5383 | | 0.836 | 2.0 | 750 | 0.8820 | 0.4983 | | 0.8155 | 3.0 | 1125 | 0.8200 | 0.5917 | | 0.829 | 4.0 | 1500 | 0.7980 | 0.5933 | | 0.8032 | 5.0 | 1875 | 0.8027 | 0.5967 | | 0.8095 | 6.0 | 2250 | 0.7557 | 0.635 | | 0.7468 | 7.0 | 2625 | 0.7635 | 0.65 | | 0.7133 | 8.0 | 3000 | 0.7025 | 0.6667 | | 0.6424 | 9.0 | 3375 | 0.8608 | 0.6467 | | 0.6511 | 10.0 | 3750 | 0.6834 | 0.6817 | | 0.6928 | 11.0 | 4125 | 0.7883 | 0.6183 | | 0.6757 | 12.0 | 
4500 | 0.7380 | 0.635 | | 0.6473 | 13.0 | 4875 | 0.6942 | 0.6633 | | 0.5828 | 14.0 | 5250 | 0.6863 | 0.7117 | | 0.5787 | 15.0 | 5625 | 0.6877 | 0.6933 | | 0.5711 | 16.0 | 6000 | 0.7012 | 0.685 | | 0.6198 | 17.0 | 6375 | 0.6000 | 0.7183 | | 0.6331 | 18.0 | 6750 | 0.6316 | 0.7217 | | 0.5457 | 19.0 | 7125 | 0.6381 | 0.7333 | | 0.585 | 20.0 | 7500 | 0.6083 | 0.7367 | | 0.4779 | 21.0 | 7875 | 0.6292 | 0.7 | | 0.4504 | 22.0 | 8250 | 0.5995 | 0.7533 | | 0.513 | 23.0 | 8625 | 0.6005 | 0.735 | | 0.5931 | 24.0 | 9000 | 0.5450 | 0.76 | | 0.4836 | 25.0 | 9375 | 0.5749 | 0.7517 | | 0.4981 | 26.0 | 9750 | 0.5577 | 0.77 | | 0.5035 | 27.0 | 10125 | 0.5452 | 0.7583 | | 0.4996 | 28.0 | 10500 | 0.5583 | 0.765 | | 0.4767 | 29.0 | 10875 | 0.5589 | 0.765 | | 0.4202 | 30.0 | 11250 | 0.5291 | 0.78 | | 0.4307 | 31.0 | 11625 | 0.5250 | 0.7967 | | 0.5107 | 32.0 | 12000 | 0.5223 | 0.7917 | | 0.4923 | 33.0 | 12375 | 0.5101 | 0.7917 | | 0.4996 | 34.0 | 12750 | 0.5329 | 0.79 | | 0.3762 | 35.0 | 13125 | 0.5542 | 0.79 | | 0.4379 | 36.0 | 13500 | 0.5598 | 0.7883 | | 0.4018 | 37.0 | 13875 | 0.5521 | 0.7983 | | 0.4033 | 38.0 | 14250 | 0.5506 | 0.7767 | | 0.4228 | 39.0 | 14625 | 0.5150 | 0.7917 | | 0.366 | 40.0 | 15000 | 0.5580 | 0.8017 | | 0.3549 | 41.0 | 15375 | 0.5360 | 0.8067 | | 0.3677 | 42.0 | 15750 | 0.5521 | 0.8 | | 0.4255 | 43.0 | 16125 | 0.5412 | 0.8033 | | 0.355 | 44.0 | 16500 | 0.5640 | 0.7717 | | 0.3586 | 45.0 | 16875 | 0.5441 | 0.7783 | | 0.3404 | 46.0 | 17250 | 0.5592 | 0.7867 | | 0.3867 | 47.0 | 17625 | 0.5593 | 0.8 | | 0.3586 | 48.0 | 18000 | 0.5571 | 0.8067 | | 0.2696 | 49.0 | 18375 | 0.5541 | 0.8 | | 0.3761 | 50.0 | 18750 | 0.5638 | 0.7967 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.0+cu121 - Datasets 2.12.0 - Tokenizers 0.13.2
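The card above trains with a linear scheduler and a 0.1 warmup ratio; a small sketch of the per-step learning rate this implies (a hypothetical helper, not the Trainer's internal code, which may differ in off-by-one details):

```python
def linear_warmup_lr(step, total_steps, base_lr=0.001, warmup_ratio=0.1):
    """Learning rate at a given step: linear ramp up during warmup,
    then linear decay to zero, mirroring the hyperparameters above."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

total = 18750  # 50 epochs x 375 steps per epoch, as in the results table
print(linear_warmup_lr(0, total))      # 0.0 at the start of warmup
print(linear_warmup_lr(1875, total))   # peak lr 0.001 once warmup ends
print(linear_warmup_lr(18750, total))  # 0.0 at the end of training
```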
pasithbas159/multilabel_transfer_learning_transformer
pasithbas159
"2024-10-11T13:25:44Z"
178
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-10-11T13:23:50Z"
--- library_name: transformers license: mit base_model: xlm-roberta-large tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: multilabel_transfer_learning_transformer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilabel_transfer_learning_transformer This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0217 - F1: 0.9924 - Roc Auc: 0.9955 - Accuracy: 0.9887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 123 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 300 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | 0.5454 | 1.0 | 136 | 0.4135 | 0.0125 | 0.5030 | 0.0 | | 0.3917 | 2.0 | 272 | 0.3582 | 0.2939 | 0.5855 | 0.0338 | | 0.3405 | 3.0 | 408 | 0.3048 | 0.4862 | 0.6649 | 0.0827 | | 0.2918 | 4.0 | 544 | 0.2753 | 0.5913 | 0.7250 | 0.1278 | | 0.2531 | 5.0 | 680 | 0.2285 | 0.7261 | 0.8065 | 0.2406 | | 0.214 | 6.0 | 816 | 0.1971 | 0.7684 | 0.8328 | 0.3233 | | 0.181 | 7.0 | 952 | 0.1663 | 0.8199 | 0.8624 | 0.4173 | | 0.1529 | 8.0 | 1088 | 0.1431 | 0.8591 | 0.8905 | 0.4774 | | 0.1307 | 9.0 | 1224 | 0.1224 | 0.8979 | 0.9260 | 0.6090 | | 0.1108 | 10.0 | 1360 | 0.1034 | 0.9195 | 0.9329 | 0.6955 | | 0.0961 | 11.0 | 1496 | 0.0920 | 0.9435 | 0.9553 | 0.7744 | | 0.0821 
| 12.0 | 1632 | 0.0793 | 0.9559 | 0.9627 | 0.8346 | | 0.0719 | 13.0 | 1768 | 0.0682 | 0.9636 | 0.9732 | 0.8759 | | 0.0612 | 14.0 | 1904 | 0.0618 | 0.9651 | 0.9760 | 0.8947 | | 0.0526 | 15.0 | 2040 | 0.0519 | 0.9757 | 0.9796 | 0.9135 | | 0.0456 | 16.0 | 2176 | 0.0468 | 0.9778 | 0.9835 | 0.9248 | | 0.0394 | 17.0 | 2312 | 0.0396 | 0.9854 | 0.9885 | 0.9586 | | 0.0343 | 18.0 | 2448 | 0.0372 | 0.9855 | 0.9911 | 0.9586 | | 0.0299 | 19.0 | 2584 | 0.0329 | 0.9854 | 0.9885 | 0.9586 | | 0.0266 | 20.0 | 2720 | 0.0289 | 0.9887 | 0.9932 | 0.9887 | | 0.0233 | 21.0 | 2856 | 0.0264 | 0.9874 | 0.9919 | 0.9812 | | 0.0212 | 22.0 | 2992 | 0.0258 | 0.9887 | 0.9932 | 0.9887 | | 0.02 | 23.0 | 3128 | 0.0242 | 0.9887 | 0.9932 | 0.9887 | | 0.0177 | 24.0 | 3264 | 0.0217 | 0.9924 | 0.9955 | 0.9887 | | 0.0162 | 25.0 | 3400 | 0.0200 | 0.9887 | 0.9932 | 0.9887 | | 0.0146 | 26.0 | 3536 | 0.0201 | 0.9906 | 0.9951 | 0.9887 | | 0.0136 | 27.0 | 3672 | 0.0192 | 0.9906 | 0.9951 | 0.9887 | | 0.0127 | 28.0 | 3808 | 0.0169 | 0.9924 | 0.9955 | 0.9887 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
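Multilabel classification like the model above typically turns logits into independent per-label probabilities with a sigmoid and a fixed threshold (rather than a softmax over labels); a minimal sketch on dummy logits — the values and threshold here are illustrative, not taken from this model:

```python
import math

def multilabel_predict(logits, threshold=0.5):
    """Apply a sigmoid to each logit independently and keep the indices
    of labels whose probability clears the threshold."""
    probs = [1 / (1 + math.exp(-x)) for x in logits]
    return [i for i, p in enumerate(probs) if p >= threshold], probs

# Dummy logits for three labels: strongly positive, negative, mildly positive
active, probs = multilabel_predict([2.0, -1.5, 0.3])
print(active)  # [0, 2] -- unlike single-label softmax, several labels can fire
```

The F1 / ROC-AUC / accuracy figures in the table above are computed over exactly these per-label binary decisions.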
aipib/Florence-2-OCRJP-remake3
aipib
"2025-04-07T12:45:32Z"
68
0
null
[ "safetensors", "florence2", "custom_code", "license:mit", "region:us" ]
null
"2025-04-06T00:09:29Z"
EVA-UNIT-01/EVA-Qwen2.5-32B-v0.1
EVA-UNIT-01
"2024-11-01T16:28:32Z"
1,046
14
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Nopm/Opus_WritingStruct", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Gryphe/Sonnet3.5-Charcard-Roleplay", "dataset:Gryphe/ChatGPT-4o-Writing-Prompts", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts", "dataset:allura-org/Celeste-1.x-data-mixture", "dataset:cognitivecomputations/dolphin-2.9.3", "base_model:Qwen/Qwen2.5-32B", "base_model:finetune:Qwen/Qwen2.5-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-10-31T05:13:25Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-32B datasets: - anthracite-org/kalo-opus-instruct-22k-no-refusal - Nopm/Opus_WritingStruct - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Gryphe/Sonnet3.5-Charcard-Roleplay - Gryphe/ChatGPT-4o-Writing-Prompts - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - nothingiisreal/Reddit-Dirty-And-WritingPrompts - allura-org/Celeste-1.x-data-mixture - cognitivecomputations/dolphin-2.9.3 tags: - generated_from_trainer model-index: - name: EVA-Qwen2.5-32B-SFFT-v0.1 results: [] --- # EVA Qwen2.5-32B v0.1 <p> An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-32B on a mixture of synthetic and natural data.<br> It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve versatility, creativity and "flavor" of the resulting model.<br> </p> <p>This model is available for inference on <a href=https://featherless.ai/models/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.1>FeatherlessAI</a></p> <p>Dedicated to Nev.</p> <p><b>Version notes for 0.1</b>: Additional round of cleaning for the datasets, new subsets of 4o-WritingPrompts and Charcards, picking the most diverse samples from them, plus added a small subset of SystemChat2.0 to improve instruction following and slightly increased sequence length. Additionally, fixed the training config mistake from 32B 0.0: layernorm layers stay frozen this time. 
Unfreezing them caused positivity bias to appear in 32B 0.0 for some reason.</p> <p> <p>Prompt format is ChatML.</p><br> <h3>Recommended sampler values:</h3> <ul> <li>Temperature: 1</li> <li>Min-P: 0.05</li> <li>Top-A: 0.2</li> <li>Repetition Penalty: 1.03</li> </ul> <h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3> - [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json) - [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json) </p> <p> <br> <h3> Training data: </h3> <ul> <li>Celeste 70B 0.1 data mixture minus the Opus Instruct subset. See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li> <li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li> <li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li> <li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe</li> <li>Synthstruct and SynthRP datasets by Epiculous</li> <li>A subset from Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.</li> </ul> <h3> Training time and hardware: </h3> <ul><li>7 hours on 8xH100 SXM, provided by <a href=https://featherless.ai/>FeatherlessAI</a></li></ul><br> </p> <p>Model was trained by Kearm and Auri.</p> <h4>Special thanks:</h4><ul> <li><b>to <a href=https://featherless.ai/>FeatherlessAI</a> for generously providing an 8xH100 SXM node for training of this model</b></li> <li>to Gryphe, Lemmy, Kalomaze, Nopm, Epiculous and CognitiveComputations for the data</li> <li>and to Allura-org for support, feedback, beta-testing and doing quality control of EVA models.</li></ul> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See 
axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: Qwen/Qwen2.5-32B load_in_8bit: false load_in_4bit: false strict: false plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true liger_fused_linear_cross_entropy: true # plugins: # - axolotl.integrations.spectrum.SpectrumPlugin # spectrum_top_fraction: 0.5 # # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror # spectrum_model_name: Qwen/Qwen2.5-32B datasets: - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl type: sharegpt - path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl type: sharegpt - path: datasets/Celeste_Filtered.jsonl type: sharegpt - path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt.jsonl type: sharegpt - path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl type: sharegpt - path: datasets/Gryphe-4o-WP-filtered-sharegpt.jsonl type: sharegpt - path: datasets/deduped_not_samantha_norefusals.jsonl type: sharegpt - path: datasets/SystemChat_subset_filtered_sharegpt.jsonl type: sharegpt chat_template: chatml shuffle_merged_datasets: true val_set_size: 0.001 output_dir: ./EVA-Qwen2.5-32B-SFFT-v0.1 sequence_len: 9216 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # adapter: qlora # lora_model_dir: # lora_r: 64 # lora_alpha: 128 # lora_dropout: 0.05 # lora_target_linear: true # peft_use_dora: true unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.63.mlp.down_proj - model.layers.49.mlp.down_proj - model.layers.48.mlp.down_proj - model.layers.45.mlp.down_proj - model.layers.44.mlp.down_proj - model.layers.47.mlp.down_proj - model.layers.46.mlp.down_proj - model.layers.43.mlp.down_proj - model.layers.8.mlp.down_proj - model.layers.11.mlp.down_proj - model.layers.19.mlp.down_proj - model.layers.35.mlp.down_proj - 
model.layers.20.mlp.down_proj - model.layers.52.mlp.down_proj - model.layers.39.mlp.down_proj - model.layers.62.mlp.down_proj - model.layers.50.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.16.mlp.down_proj - model.layers.28.mlp.down_proj - model.layers.53.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.31.mlp.down_proj - model.layers.32.mlp.down_proj - model.layers.7.mlp.down_proj - model.layers.36.mlp.down_proj - model.layers.12.mlp.down_proj - model.layers.18.mlp.down_proj - model.layers.37.mlp.down_proj - model.layers.38.mlp.down_proj - model.layers.14.mlp.down_proj - model.layers.13.mlp.down_proj # mlp.gate_proj layers - model.layers.43.mlp.gate_proj - model.layers.61.mlp.gate_proj - model.layers.60.mlp.gate_proj - model.layers.44.mlp.gate_proj - model.layers.62.mlp.gate_proj - model.layers.28.mlp.gate_proj - model.layers.29.mlp.gate_proj - model.layers.45.mlp.gate_proj - model.layers.37.mlp.gate_proj - model.layers.35.mlp.gate_proj - model.layers.59.mlp.gate_proj - model.layers.36.mlp.gate_proj - model.layers.30.mlp.gate_proj - model.layers.48.mlp.gate_proj - model.layers.38.mlp.gate_proj - model.layers.27.mlp.gate_proj - model.layers.31.mlp.gate_proj - model.layers.34.mlp.gate_proj - model.layers.58.mlp.gate_proj - model.layers.33.mlp.gate_proj - model.layers.39.mlp.gate_proj - model.layers.26.mlp.gate_proj - model.layers.32.mlp.gate_proj - model.layers.46.mlp.gate_proj - model.layers.42.mlp.gate_proj - model.layers.49.mlp.gate_proj - model.layers.57.mlp.gate_proj - model.layers.50.mlp.gate_proj - model.layers.47.mlp.gate_proj - model.layers.56.mlp.gate_proj - model.layers.63.mlp.gate_proj - model.layers.55.mlp.gate_proj # mlp.up_proj layers - model.layers.61.mlp.up_proj - model.layers.60.mlp.up_proj - model.layers.32.mlp.up_proj - model.layers.59.mlp.up_proj - model.layers.58.mlp.up_proj - model.layers.57.mlp.up_proj - model.layers.44.mlp.up_proj - model.layers.28.mlp.up_proj - model.layers.35.mlp.up_proj - 
model.layers.36.mlp.up_proj - model.layers.29.mlp.up_proj - model.layers.31.mlp.up_proj - model.layers.34.mlp.up_proj - model.layers.55.mlp.up_proj - model.layers.49.mlp.up_proj - model.layers.30.mlp.up_proj - model.layers.53.mlp.up_proj - model.layers.43.mlp.up_proj - model.layers.56.mlp.up_proj - model.layers.33.mlp.up_proj - model.layers.54.mlp.up_proj - model.layers.62.mlp.up_proj - model.layers.27.mlp.up_proj - model.layers.51.mlp.up_proj - model.layers.52.mlp.up_proj - model.layers.37.mlp.up_proj - model.layers.45.mlp.up_proj - model.layers.26.mlp.up_proj - model.layers.42.mlp.up_proj - model.layers.50.mlp.up_proj - model.layers.48.mlp.up_proj - model.layers.39.mlp.up_proj # self_attn.k_proj layers - model.layers.63.self_attn.k_proj - model.layers.55.self_attn.k_proj - model.layers.60.self_attn.k_proj - model.layers.7.self_attn.k_proj - model.layers.12.self_attn.k_proj - model.layers.13.self_attn.k_proj - model.layers.57.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.14.self_attn.k_proj - model.layers.51.self_attn.k_proj - model.layers.53.self_attn.k_proj - model.layers.54.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.61.self_attn.k_proj - model.layers.18.self_attn.k_proj - model.layers.30.self_attn.k_proj - model.layers.9.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.10.self_attn.k_proj - model.layers.58.self_attn.k_proj - model.layers.56.self_attn.k_proj - model.layers.15.self_attn.k_proj - model.layers.32.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.8.self_attn.k_proj - model.layers.59.self_attn.k_proj - model.layers.11.self_attn.k_proj - model.layers.48.self_attn.k_proj - model.layers.16.self_attn.k_proj - model.layers.50.self_attn.k_proj # self_attn.o_proj layers - model.layers.15.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.31.self_attn.o_proj - model.layers.30.self_attn.o_proj - 
model.layers.18.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.28.self_attn.o_proj - model.layers.34.self_attn.o_proj - model.layers.33.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.29.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.26.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.27.self_attn.o_proj - model.layers.35.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.36.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.37.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.54.self_attn.o_proj - model.layers.5.self_attn.o_proj - model.layers.38.self_attn.o_proj - model.layers.6.self_attn.o_proj - model.layers.8.self_attn.o_proj - model.layers.9.self_attn.o_proj # self_attn.q_proj layers - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.3.self_attn.q_proj - model.layers.45.self_attn.q_proj - model.layers.54.self_attn.q_proj - model.layers.35.self_attn.q_proj - model.layers.48.self_attn.q_proj - model.layers.61.self_attn.q_proj - model.layers.52.self_attn.q_proj - model.layers.50.self_attn.q_proj - model.layers.60.self_attn.q_proj - model.layers.56.self_attn.q_proj - model.layers.58.self_attn.q_proj - model.layers.42.self_attn.q_proj - model.layers.59.self_attn.q_proj - model.layers.44.self_attn.q_proj - model.layers.55.self_attn.q_proj - model.layers.57.self_attn.q_proj - model.layers.41.self_attn.q_proj - model.layers.36.self_attn.q_proj - model.layers.39.self_attn.q_proj - model.layers.4.self_attn.q_proj - model.layers.43.self_attn.q_proj - model.layers.34.self_attn.q_proj - model.layers.46.self_attn.q_proj - model.layers.49.self_attn.q_proj - model.layers.40.self_attn.q_proj - model.layers.25.self_attn.q_proj - 
model.layers.51.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.37.self_attn.q_proj - model.layers.53.self_attn.q_proj # self_attn.v_proj layers - model.layers.55.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.47.self_attn.v_proj - model.layers.45.self_attn.v_proj - model.layers.49.self_attn.v_proj - model.layers.48.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.7.self_attn.v_proj - model.layers.44.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.51.self_attn.v_proj - model.layers.50.self_attn.v_proj - model.layers.14.self_attn.v_proj - model.layers.54.self_attn.v_proj - model.layers.32.self_attn.v_proj - model.layers.43.self_attn.v_proj - model.layers.10.self_attn.v_proj - model.layers.46.self_attn.v_proj - model.layers.38.self_attn.v_proj - model.layers.57.self_attn.v_proj - model.layers.22.self_attn.v_proj - model.layers.39.self_attn.v_proj - model.layers.6.self_attn.v_proj - model.layers.23.self_attn.v_proj - model.layers.58.self_attn.v_proj - model.layers.53.self_attn.v_proj - model.layers.40.self_attn.v_proj - model.layers.24.self_attn.v_proj - model.layers.9.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.5.self_attn.v_proj wandb_project: EVA-Qwen2.5-32B-SFFT-v0.1 wandb_entity: wandb_watch: wandb_name: Unit-01 wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 3 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.00005 max_grad_norm: 3 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: "unsloth" # gradient_checkpointing_kwargs: # use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 2 save_safetensors: true hub_model_id: hub_strategy: debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 
# fsdp: # - full_shard # - auto_wrap # fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: false # fsdp_offload_params: true # fsdp_cpu_ram_efficient_loading: true # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer # fsdp_activation_checkpointing: true # fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT # fsdp_sharding_strategy: FULL_SHARD # fsdp_forward_prefetch: false # Added # fsdp_backward_prefetch: "BACKWARD_PRE" # Added # fsdp_backward_prefetch_limit: 1 # Added # fsdp_mixed_precision: BF16 # Added ``` </details>
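The EVA card above specifies ChatML as its prompt format; a minimal sketch of assembling a ChatML prompt by hand, with the card's recommended sampler values collected in a dict (the parameter names follow common inference-API conventions and are an assumption — Top-A in particular is not universally supported, so it is omitted here):

```python
def chatml_prompt(messages):
    """Wrap each message in ChatML <|im_start|>/<|im_end|> markers and
    leave the prompt open for the assistant's reply."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    return "".join(parts) + "<|im_start|>assistant\n"

prompt = chatml_prompt([
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Write an opening line."},
])
print(prompt)

# Sampler values recommended by the card, ready to pass to a generation call:
sampler = {"temperature": 1.0, "min_p": 0.05, "repetition_penalty": 1.03}
```

In practice the model's bundled tokenizer chat template produces this same layout via `tokenizer.apply_chat_template`.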
sail-rvc/Penny_Fitzgerald__Latino___RVC_V2_-_300_Epochs_
sail-rvc
"2023-07-14T07:30:00Z"
1
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
"2023-07-14T07:29:32Z"
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # Penny_Fitzgerald__Latino___RVC_V2_-_300_Epochs_ ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:30:00 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
irispansee/image_classification
irispansee
"2023-09-18T15:33:18Z"
173
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-09-18T07:09:58Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: image_classification results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.3375 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # image_classification This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.8157 - Accuracy: 0.3375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 20 | 2.0226 | 0.2625 | | No log | 2.0 | 40 | 1.8855 | 0.2938 | | No log | 3.0 | 60 | 1.8171 | 0.35 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
Charan-2714M/llama3-8b-instruct-ipc-sections
Charan-2714M
"2025-01-03T06:28:19Z"
75
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-01-02T15:45:32Z"
--- library_name: transformers tags: [text-generation] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fireworks12/git-base-pokemon
Fireworks12
"2023-07-14T02:49:39Z"
37
0
transformers
[ "transformers", "pytorch", "tensorboard", "git", "image-text-to-text", "generated_from_trainer", "vision", "image-to-text", "image-captioning", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-to-text
"2023-07-12T16:24:59Z"
--- license: mit tags: - generated_from_trainer - vision - image-to-text - image-captioning datasets: - imagefolder model-index: - name: git-base-pokemon results: [] pipeline_tag: image-to-text --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1817 - Wer Score: 9.0938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.3974 | 0.7 | 50 | 4.5248 | 4.5234 | | 2.2794 | 1.4 | 100 | 0.4021 | 5.1680 | | 0.1697 | 2.1 | 150 | 0.1398 | 1.5039 | | 0.0816 | 2.8 | 200 | 0.1458 | 9.9570 | | 0.0556 | 3.5 | 250 | 0.1417 | 2.5234 | | 0.043 | 4.2 | 300 | 0.1448 | 12.8086 | | 0.0285 | 4.9 | 350 | 0.1469 | 7.3867 | | 0.021 | 5.59 | 400 | 0.1505 | 13.0312 | | 0.0205 | 6.29 | 450 | 0.1499 | 6.3281 | | 0.0179 | 6.99 | 500 | 0.1527 | 13.0234 | | 0.0157 | 7.69 | 550 | 0.1552 | 6.3047 | | 0.015 | 8.39 | 600 | 0.1571 | 6.7656 | | 0.015 | 9.09 | 650 | 0.1579 | 10.2305 | | 0.0137 | 9.79 | 700 | 0.1585 | 11.4219 | | 0.0132 | 10.49 | 750 | 0.1598 | 5.8320 | | 0.0132 | 11.19 | 800 | 0.1591 | 12.0508 | | 0.013 | 11.89 | 850 | 0.1612 | 7.9492 | | 0.0117 | 
12.59 | 900 | 0.1621 | 8.1758 | | 0.0123 | 13.29 | 950 | 0.1632 | 12.9961 | | 0.0125 | 13.99 | 1000 | 0.1613 | 10.2031 | | 0.0116 | 14.69 | 1050 | 0.1642 | 5.7930 | | 0.0112 | 15.38 | 1100 | 0.1636 | 6.1719 | | 0.0112 | 16.08 | 1150 | 0.1652 | 7.2422 | | 0.0107 | 16.78 | 1200 | 0.1644 | 12.9961 | | 0.0108 | 17.48 | 1250 | 0.1661 | 5.0117 | | 0.0109 | 18.18 | 1300 | 0.1658 | 7.3242 | | 0.0108 | 18.88 | 1350 | 0.1691 | 6.0547 | | 0.0101 | 19.58 | 1400 | 0.1690 | 6.9141 | | 0.0103 | 20.28 | 1450 | 0.1692 | 7.1680 | | 0.0107 | 20.98 | 1500 | 0.1702 | 12.3281 | | 0.0099 | 21.68 | 1550 | 0.1708 | 10.75 | | 0.0103 | 22.38 | 1600 | 0.1714 | 9.5586 | | 0.0101 | 23.08 | 1650 | 0.1713 | 12.9805 | | 0.0098 | 23.78 | 1700 | 0.1712 | 11.4883 | | 0.0095 | 24.48 | 1750 | 0.1711 | 9.3320 | | 0.0096 | 25.17 | 1800 | 0.1738 | 8.6523 | | 0.0097 | 25.87 | 1850 | 0.1717 | 11.5078 | | 0.0091 | 26.57 | 1900 | 0.1735 | 7.9570 | | 0.0092 | 27.27 | 1950 | 0.1729 | 9.8242 | | 0.0093 | 27.97 | 2000 | 0.1721 | 10.5078 | | 0.0087 | 28.67 | 2050 | 0.1732 | 9.3906 | | 0.009 | 29.37 | 2100 | 0.1760 | 8.0664 | | 0.009 | 30.07 | 2150 | 0.1769 | 10.5312 | | 0.0086 | 30.77 | 2200 | 0.1743 | 10.8555 | | 0.0087 | 31.47 | 2250 | 0.1772 | 10.2188 | | 0.0089 | 32.17 | 2300 | 0.1757 | 11.6016 | | 0.0088 | 32.87 | 2350 | 0.1765 | 8.9297 | | 0.0082 | 33.57 | 2400 | 0.1754 | 9.6484 | | 0.0082 | 34.27 | 2450 | 0.1770 | 12.3711 | | 0.0084 | 34.97 | 2500 | 0.1761 | 10.1523 | | 0.0076 | 35.66 | 2550 | 0.1774 | 9.1055 | | 0.0077 | 36.36 | 2600 | 0.1788 | 8.7852 | | 0.0079 | 37.06 | 2650 | 0.1782 | 11.8086 | | 0.0071 | 37.76 | 2700 | 0.1784 | 10.5234 | | 0.0075 | 38.46 | 2750 | 0.1789 | 8.8828 | | 0.0072 | 39.16 | 2800 | 0.1796 | 8.5664 | | 0.0071 | 39.86 | 2850 | 0.1804 | 9.5391 | | 0.0069 | 40.56 | 2900 | 0.1796 | 9.4062 | | 0.0068 | 41.26 | 2950 | 0.1797 | 8.9883 | | 0.0067 | 41.96 | 3000 | 0.1809 | 10.5273 | | 0.0062 | 42.66 | 3050 | 0.1801 | 10.4531 | | 0.0062 | 43.36 | 3100 | 0.1803 | 7.2188 | | 0.0063 | 44.06 
| 3150 | 0.1808 | 8.7930 | | 0.0058 | 44.76 | 3200 | 0.1804 | 10.5156 | | 0.0057 | 45.45 | 3250 | 0.1807 | 11.1328 | | 0.0059 | 46.15 | 3300 | 0.1812 | 8.6875 | | 0.0055 | 46.85 | 3350 | 0.1811 | 10.2773 | | 0.0053 | 47.55 | 3400 | 0.1814 | 10.0391 | | 0.0054 | 48.25 | 3450 | 0.1817 | 8.5391 | | 0.0053 | 48.95 | 3500 | 0.1818 | 8.9688 | | 0.005 | 49.65 | 3550 | 0.1817 | 9.0938 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
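The Wer Score reported above is a word error rate over generated captions. As a rough, self-contained illustration (not necessarily the exact metric implementation used during training), WER can be computed as the word-level edit distance divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)

# One dropped word out of a six-word reference (caption text is made up for illustration):
print(wer("a pikachu on a white background", "a pikachu on white background"))
```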
TahaBa/ppo-Huggy
TahaBa
"2024-07-31T15:32:17Z"
23
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2024-07-31T15:32:03Z"
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: TahaBa/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀

DomainInsAdap/Meta-Llama-3.1-8B-Instruct-music-tree-all-None-5-2e-05-epoch-2
DomainInsAdap
"2025-02-16T21:03:56Z"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-02-05T01:57:36Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Akshay0706/All-Plants-18-Epochs-Model
Akshay0706
"2023-11-19T17:33:59Z"
6
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:image_folder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-11-19T17:33:32Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy - f1 model-index: - name: All-Plants-18-Epochs-Model results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder config: Dataset split: train args: Dataset metrics: - name: Accuracy type: accuracy value: 0.9847645429362881 - name: F1 type: f1 value: 0.984922643975302 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # All-Plants-18-Epochs-Model This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0888 - Accuracy: 0.9848 - F1: 0.9849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 18 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.9212 | 1.0 | 407 | 0.3931 | 0.9501 | 0.9579 | | 0.2659 | 2.0 | 814 | 0.2176 | 0.9668 | 0.9674 | | 0.137 | 3.0 | 1221 | 0.1481 | 0.9723 | 0.9731 | | 0.0865 | 4.0 | 1628 | 0.1043 | 0.9834 | 0.9836 | | 0.0557 | 5.0 | 2035 | 0.0888 | 0.9848 | 0.9849 | | 0.0408 | 6.0 | 2442 | 0.0839 | 0.9848 | 0.9848 | | 0.0289 | 7.0 | 2849 | 0.0920 | 0.9848 | 0.9849 | | 0.0229 | 8.0 | 3256 | 0.0817 | 0.9834 | 0.9837 | | 0.0175 | 9.0 | 3663 | 0.0890 
| 0.9820 | 0.9823 | | 0.0156 | 10.0 | 4070 | 0.0966 | 0.9820 | 0.9823 | | 0.0121 | 11.0 | 4477 | 0.0809 | 0.9834 | 0.9837 | | 0.0102 | 12.0 | 4884 | 0.0875 | 0.9820 | 0.9823 | | 0.0086 | 13.0 | 5291 | 0.0873 | 0.9820 | 0.9823 | | 0.0077 | 14.0 | 5698 | 0.0860 | 0.9820 | 0.9823 | | 0.0068 | 15.0 | 6105 | 0.0876 | 0.9820 | 0.9823 | | 0.0062 | 16.0 | 6512 | 0.0896 | 0.9820 | 0.9823 | | 0.0059 | 17.0 | 6919 | 0.0890 | 0.9820 | 0.9823 | | 0.0056 | 18.0 | 7326 | 0.0894 | 0.9820 | 0.9823 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.14.1
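The Accuracy and F1 values in the table above come from the evaluation loop. As an illustrative sketch only (the card does not state the F1 averaging mode; support-weighted averaging is assumed here), F1 can be computed per class and then weighted by class frequency:

```python
from collections import Counter

def f1_per_class(y_true, y_pred, label):
    """F1 for a single class: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged with weights proportional to class support."""
    support = Counter(y_true)
    n = len(y_true)
    return sum(f1_per_class(y_true, y_pred, c) * cnt / n for c, cnt in support.items())
```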
Clare3Dx/3DX
Clare3Dx
"2022-10-09T13:22:16Z"
0
0
null
[ "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2022-10-09T13:22:16Z"
--- license: cc-by-nc-sa-4.0 ---
atsuki-yamaguchi/gemma-2-9b-si-30K-mean
atsuki-yamaguchi
"2024-09-17T09:31:57Z"
7
0
transformers
[ "transformers", "safetensors", "gemma2", "si", "arxiv:2406.11477", "base_model:google/gemma-2-9b", "base_model:finetune:google/gemma-2-9b", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
"2024-09-06T16:13:09Z"
--- license: gemma language: - si base_model: google/gemma-2-9b library_name: transformers --- # Gemma2 9B for Sinhala: 100 target vocabulary size + Mean target vocabulary initialization + 2x2LS/MTP/512 training This model is built on top of Gemma2 9B adapted for Sinhala using 30K target language sentences sampled from CC-100. ## Model Details * **Vocabulary**: This model has an additional target vocabulary of 100 tokens. * **Target vocabulary initialization**: The target weights of the embedding were initialized using Mean initialization. * **Training**: This model was additionally pre-trained on 30K target language sentences sampled from CC-100. The training was conducted with the 2x2LS/MTP/512 strategies introduced in the paper. ## Model Description - **Language:** Sinhala - **License:** Gemma Terms of Use - **Fine-tuned from model:** google/gemma-2-9b ## Model Sources - **Repository:** https://github.com/gucci-j/lowres-cve - **Paper:** https://arxiv.org/abs/2406.11477 ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained( "atsuki-yamaguchi/gemma-2-9b-si-30K-mean" ) tokenizer = AutoTokenizer.from_pretrained( "atsuki-yamaguchi/gemma-2-9b-si-30K-mean" ) ``` ## Citation ``` @article{yamaguchi-etal-2024-effectively, title={How Can We Effectively Expand the Vocabulary of LLMs with 0.01GB of Target Language Text?}, author={Atsuki Yamaguchi and Aline Villavicencio and Nikolaos Aletras}, year={2024}, journal={ArXiv}, volume={abs/2406.11477}, url={https://arxiv.org/abs/2406.11477}, } ```
unstpb-nlp/llama-3-8b-ft-cwi-2018-en
unstpb-nlp
"2024-06-15T15:45:30Z"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-15T15:36:26Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TroyDoesAI/Merge1
TroyDoesAI
"2024-07-14T17:15:46Z"
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:TroyDoesAI/Abliterated-Merge-WIP", "base_model:merge:TroyDoesAI/Abliterated-Merge-WIP", "base_model:TroyDoesAI/Abliterated-Merge-WIP-3", "base_model:merge:TroyDoesAI/Abliterated-Merge-WIP-3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-14T17:09:15Z"
--- base_model: - TroyDoesAI/Abliterated-Merge-WIP - TroyDoesAI/Abliterated-Merge-WIP-3 library_name: transformers tags: - mergekit - merge --- # BlackSheep-AbliteratedDareDevil-cot-sydney-cot This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [TroyDoesAI/Abliterated-Merge-WIP](https://huggingface.co/TroyDoesAI/Abliterated-Merge-WIP) * [TroyDoesAI/Abliterated-Merge-WIP-3](https://huggingface.co/TroyDoesAI/Abliterated-Merge-WIP-3) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: TroyDoesAI/Abliterated-Merge-WIP layer_range: [0, 32] - model: TroyDoesAI/Abliterated-Merge-WIP-3 layer_range: [0, 32] merge_method: slerp base_model: TroyDoesAI/Abliterated-Merge-WIP parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: float16 ```
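For intuition, SLERP interpolates along the arc between two weight tensors rather than along the straight line between them, which preserves magnitude better than plain averaging. Below is a minimal sketch on plain Python lists — illustrative only; mergekit's per-tensor implementation handles dtype, shape, and degenerate-angle cases that this toy version glosses over:

```python
import math

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight vectors a and b at fraction t."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    ua = [x / norm_a for x in a]
    ub = [x / norm_b for x in b]
    # Angle between the (normalized) vectors, clamped for numerical safety.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(ua, ub))))
    theta = math.acos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    w1 = math.sin((1 - t) * theta) / s
    w2 = math.sin(t * theta) / s
    return [w1 * x + w2 * y for x, y in zip(a, b)]
```

In the YAML above, `t` is what the `filter` blocks vary per layer and tensor type: `t=0` keeps the base model's weights, `t=1` keeps the other model's, and `0.5` is the fallback midpoint.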
venciso/my_awesome_eli5_mlm_model
venciso
"2023-06-07T11:02:35Z"
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-06-07T10:15:32Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_mlm_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_mlm_model This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 2.0398 - eval_runtime: 14.7181 - eval_samples_per_second: 153.077 - eval_steps_per_second: 19.16 - epoch: 0.03 - step: 37 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
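Since only the evaluation cross-entropy loss is reported, a convenient derived metric for masked language models is perplexity — the exponential of the mean loss (in nats). A minimal sketch:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    """Perplexity is exp of the mean cross-entropy loss (loss measured in nats)."""
    return math.exp(cross_entropy_loss)

# For the eval_loss of 2.0398 reported above, perplexity is roughly 7.7.
print(round(perplexity(2.0398), 2))
```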
Sharpaxis/News_classification_distilbert
Sharpaxis
"2024-12-06T18:49:03Z"
132
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-06T18:19:54Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - f1 model-index: - name: News_classification_distilbert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # News_classification_distilbert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1001 - F1: 0.9786 - Acc: 0.9795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Acc | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 0.1312 | 1.0 | 3868 | 0.0919 | 0.9760 | 0.9771 | | 0.063 | 2.0 | 7736 | 0.0884 | 0.9785 | 0.9794 | | 0.0396 | 3.0 | 11604 | 0.1001 | 0.9786 | 0.9795 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.0 - Datasets 3.1.0 - Tokenizers 0.20.3
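For reference, the linear scheduler with 400 warmup steps ramps the learning rate from 0 up to 1e-05 over the first 400 steps, then decays it linearly back to 0 by the final step (11604 here). An illustrative sketch of that shape — not the Trainer's internal scheduler code:

```python
def linear_warmup_then_decay(step, warmup_steps, total_steps, peak_lr):
    """Linear warmup to peak_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# Hyperparameters from the card: lr 1e-05, 400 warmup steps, 11604 total steps.
print(linear_warmup_then_decay(200, 400, 11604, 1e-5))   # halfway through warmup
print(linear_warmup_then_decay(400, 400, 11604, 1e-5))   # at the peak
```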
polyconnect/Reinforce-Pixelcopter-PLE-v0
polyconnect
"2024-06-15T18:24:38Z"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2024-06-12T14:28:49Z"
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 43.30 +/- 30.27 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
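The core of the REINFORCE update is weighting each action's log-probability gradient by the discounted return from that timestep onward. An illustrative sketch of the return computation (the discount factor shown is an assumption for the example, not necessarily what this run used):

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over one episode.
    These returns scale the policy's log-prob gradients in the REINFORCE update."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```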
RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf
RichardErkhov
"2024-10-15T21:55:33Z"
8
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-10-15T21:27:37Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama1B-selector-and-SQL-v1-all-16500steps - GGUF - Model creator: https://huggingface.co/NESPED-GEN/ - Original model: https://huggingface.co/NESPED-GEN/TinyLlama1B-selector-and-SQL-v1-all-16500steps/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q2_K.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K.gguf) | Q3_K | 0.51GB | | 
[TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_0.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_K.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_K.gguf) | Q4_K | 0.62GB | | 
[TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_1.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_0.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_K.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_1.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q6_K.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q6_K.gguf) | Q6_K | 
0.84GB | | [TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q8_0.gguf](https://huggingface.co/RichardErkhov/NESPED-GEN_-_TinyLlama1B-selector-and-SQL-v1-all-16500steps-gguf/blob/main/TinyLlama1B-selector-and-SQL-v1-all-16500steps.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tuanna08go/5c852326-eaa5-42c1-80b6-a7e40283f0c1
tuanna08go
"2025-01-22T15:04:31Z"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Llama-3.2-1B", "base_model:adapter:NousResearch/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
"2025-01-22T15:01:10Z"
--- library_name: peft license: llama3.2 base_model: NousResearch/Llama-3.2-1B tags: - axolotl - generated_from_trainer model-index: - name: 5c852326-eaa5-42c1-80b6-a7e40283f0c1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Llama-3.2-1B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 4c221d5c2601a459_train_data.json ds_type: json format: custom path: /workspace/input_data/4c221d5c2601a459_train_data.json type: field_input: Paper Finding field_instruction: News Context field_output: News Finding format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: tuanna08go/5c852326-eaa5-42c1-80b6-a7e40283f0c1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 50 micro_batch_size: 2 mlflow_experiment_name: /tmp/4c221d5c2601a459_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null 
s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: <|end_of_text|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: e317de3c-caa4-44e7-b5dd-41c531a9f8de wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: e317de3c-caa4-44e7-b5dd-41c531a9f8de warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5c852326-eaa5-42c1-80b6-a7e40283f0c1 This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0904 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0014 | 1 | 0.7443 | | 0.7488 | 0.0142 | 10 | 0.5939 | | 0.2374 | 0.0284 | 20 | 0.1674 | | 0.125 | 0.0426 | 30 | 0.1194 | | 0.0732 | 0.0569 | 40 | 0.0924 | | 0.1052 | 0.0711 | 50 | 0.0904 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
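As a cross-check on the schedule settings in the config above, the cosine learning-rate curve with 10 warmup steps over 50 training steps can be sketched as follows (an illustrative re-implementation, not the trainer's exact code):

```python
import math

def lr_at(step, base_lr=1e-4, warmup_steps=10, total_steps=50):
    """Linear warmup to base_lr, then cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# The peak rate is reached at step 10 and decays to zero by step 50.
```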
golaxy/gogpt2-7b
golaxy
"2023-07-31T09:36:45Z"
1,446
7
transformers
[ "transformers", "pytorch", "tensorboard", "llama", "text-generation", "llama2", "chinese-llama2", "gogpt2-7b", "zh", "dataset:BelleGroup/train_0.5M_CN", "dataset:BelleGroup/train_1M_CN", "dataset:c-s-ale/alpaca-gpt4-data-zh", "dataset:BAAI/COIG", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-21T09:52:20Z"
--- license: apache-2.0 datasets: - BelleGroup/train_0.5M_CN - BelleGroup/train_1M_CN - c-s-ale/alpaca-gpt4-data-zh - BAAI/COIG language: - zh tags: - llama2 - chinese-llama2 - gogpt2-7b --- # GoGPT2-7B: A Chinese-English Enhanced Large Model Trained from Llama2-7b ![Chinese LLaMA2 7B](./assets/gogpt-banner-tou.png) <p align="center"> <img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square"> <img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca"> </p> > ICT Chinese-English foundation-enhanced large models: 7 billion and 13 billion parameters GoGPT weights are available on 🤗Huggingface; the gogpt-7b and gogpt2-7b weights are currently released | Model name | Base model | Size | Download | |-------------------------------------------------------------|-----------|------|-------------------------------------------------| | [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) | | [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) | | [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | | [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | [GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT) ## 🚀 Step 1: Train the tokenizer [🐱 How to train an LLM tokenizer from scratch](https://github.com/yanqiangmiffy/how-to-train-tokenizer) ```text ├── data │ └── corpus.txt training corpus ├── llama │ ├── tokenizer_checklist.chk │ └── tokenizer.model ├── merged_tokenizer_hf merged result, HF format │ ├── special_tokens_map.json │ ├── tokenizer_config.json │ └── tokenizer.model ├── merged_tokenizer_sp │ └── open_llama.model # ├── merge_tokenizer │ └── tokenizer.model ├── open_llama.model trained SentencePiece model ├── open_llama.vocab trained SentencePiece vocabulary ├── README.md ├── step0_step0_process_text.py prepare training corpus from multiple datasets ├── step1_make_corpus.py prepare training corpus from Chinese Wikipedia data ├── 
step2_train_tokenzier.py train the tokenizer ├── step3_tokenzier_segment.py test the trained tokenizer, including encoding and decoding examples └── step4_merge_tokenizers.py merge with the original LLaMA tokenizer to obtain an HF-format tokenizer ``` The original Llama2 vocabulary has **32000** tokens; after merging with a Chinese tokenizer trained with a 40k vocabulary, the merged vocabulary has **68419** tokens, and **68420** after adding a pad token for SFT ## 🚀 Step 2: Continued pre-training > Incremental/continued pre-training of LLaMA on a Chinese pre-training corpus; the corpus is currently 20GB and will keep being updated ## 🚀 Step 3: Supervised fine-tuning Fine-tuned on diverse instruction data, including Chinese and English instruction data from BELLE and Alpaca as well as MOSS multi-turn dialogue data, completing instruction tuning on 1.2M+ examples - BELLE data: 120k examples, v1 - stanford_alpaca: 52k examples, v2 - stanford_alpaca_gpt4_zh: 52k examples, v2 - [sharegpt](data%2Ffinetune%2Fsharegpt): 90k examples - [fnlp/moss-003-sft-data](https://huggingface.co/datasets/fnlp/moss-003-sft-data): MOSS multi-turn dialogue data After sampling by length (output length around 500), 110k instruction examples were selected for SFT training ## Example outputs - What is the difference between a watermelon and a pumpkin? ``` 1. Appearance: A watermelon is usually round or oval, with a light yellow or brown rind and red or yellow flesh inside. A pumpkin is also round or oval, usually with a light yellow or brown rind and orange-yellow flesh inside. 2. Varieties: Watermelons mainly come in three varieties: small, medium and giant. Small watermelons usually have one or two cores, medium ones two or three, and giant ones four or more. Pumpkins mainly come in two varieties: pumpkins and turnip pumpkins. Pumpkins usually have two or three cores, turnip pumpkins four or more. 3. Growing environment: Watermelons need a warm climate and plenty of sunshine; they usually grow in tropical or subtropical regions. Pumpkins grow in temperate and cold regions; they need a longer winter to grow, while watermelons usually grow in spring and summer. 4. Nutritional value: Both watermelons and pumpkins are rich in vitamin C and fiber. Watermelons are also rich in antioxidants and potassium, while pumpkins are rich in folic acid and vitamin E. In short, watermelons and pumpkins are two completely different kinds of produce, differing widely in appearance, variety, growing environment and nutritional value. ``` - How can I improve my time-management skills? ``` Response: 1. Make plans and set goals: draw up weekly or daily plans and goals to help you allocate your time better. 2. Prioritize the most important tasks: finish the most important tasks first; this helps avoid distraction and wasted time. 3. Avoid procrastination: try not to procrastinate and start tasks early, so you can finish them more easily. 4. Focus your attention: concentrate and avoid juggling multiple tasks at once; this reduces distraction and improves efficiency. 5. Use tools and techniques: use time-management tools and techniques such as calendars, reminders and to-do lists to manage your time better. 6. Learn to say "no": learn to decline unnecessary things; this helps you use your time better and focus on the most important tasks. 7. Keep a positive mindset: stay positive, believe you can manage your time, and keep learning and developing your time-management skills. 8. Rest and relax: try to keep a balance between work and life, and rest and relax regularly; this helps improve both your time-management skills and your work efficiency. ``` - What are the main differences between the Python and JavaScript programming languages? ``` 1. Different syntax: Python is an object-oriented programming language with fairly strict syntax that uses braces to mark code blocks. JavaScript is a scripting language with relatively loose syntax that uses braces or indentation to mark code blocks. 2. Different application areas: Python is widely used in data analysis, machine learning, artificial intelligence and other fields thanks to its readability, clear syntax and powerful features. JavaScript is mainly used in front-end, back-end and mobile development. 3. Different runtime environments: Python's runtime environment is fairly self-contained and runs on different operating systems. JavaScript's runtime environment is more complicated and must account for browser and web-server compatibility. 4. Different use cases: Python is better suited to scientific computing, data analysis and machine learning, with libraries such as Pandas and Numpy; JavaScript is better suited to scripting for front-end, back-end and mobile development, with frameworks such as React and Angular. 5. Different libraries and frameworks: Python has a rich standard library and third-party libraries such as Django and Flask, while JavaScript has popular web development frameworks such as React and Angular. ``` - How will AI affect agricultural production? 
``` AI will help improve the efficiency of agricultural production, optimize crop planting and harvesting, and improve the quality of agricultural products. AI technologies can be applied to plant monitoring, irrigation management, pest and disease control, soil analysis and more, reducing labor costs, improving production efficiency, and cutting resource waste and environmental pollution in agriculture. For example, AI can monitor environmental factors such as soil moisture and weather conditions to help farmers decide when to plant, irrigate and fertilize, improving crop yield and quality. In addition, by monitoring plant health, AI can detect and prevent pests and diseases early, improving crop yield and quality. AI can also help farmers forecast future weather, droughts and floods and draw up more effective disaster-response plans. In short, AI is poised to become an indispensable technology in agricultural production, bringing greater improvements and benefits in the future. ``` - Translate into English: The past is all false; memory is a road with no way back; none of the springs gone by can be restored; even the most passionate and steadfast love is, in the end, only a fleeting reality; only solitude is eternal. ``` Response: Translation into English: The past is false, memories are a dead-end road, all previous springs are hopelessly incapable of being restored, even the most passionate and steadfast love is ultimately a fleeting reality, and only solitude is eternal ``` - Please implement the quicksort algorithm in Python Quicksort (Quick Sort) is a commonly used sorting algorithm. The basic idea is to split the array into two parts in one pass, where every element of one part is smaller than every element of the other, and then sort each part separately until the whole array is ordered. Here is an implementation of quicksort in Python: ```python def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr)//2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ``` In this implementation, we first check whether the array's length is at most 1; if so, the array is returned as is. Otherwise a pivot element is chosen and the array is split into three parts: elements smaller than the pivot on the left, elements equal to the pivot in the middle, and elements larger than the pivot on the right. The left and right parts are then sorted recursively and joined with the middle part, and the sorted array is returned. Here is an example of sorting an array with the code above: ```python arr = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5] sorted_arr = quick_sort(arr) print(sorted_arr) ``` The output is: [1, 1, 2, 3, 3, 4, 5, 5, 5, 6, 9] ## Disclaimer The resources in this project are for academic research only; commercial use is strictly prohibited. When using parts that involve third-party code, strictly follow the corresponding open-source licenses. Content generated by the model is affected by model computation, randomness, quantization precision loss and other factors, and this project makes no guarantee of its accuracy. This project assumes no legal liability for any model output, nor any responsibility for losses that may arise from using the related resources and outputs.
Isotonic/mt5-small-ai4privacy
Isotonic
"2023-09-14T13:39:13Z"
126
1
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "en", "fr", "de", "it", "dataset:ai4privacy/pii-masking-65k", "base_model:google/mt5-small", "base_model:quantized:google/mt5-small", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-08-31T13:26:29Z"
--- license: cc-by-4.0 base_model: google/mt5-small tags: - generated_from_trainer model-index: - name: mt5-small-ai4privacy results: [] datasets: - ai4privacy/pii-masking-65k language: - en - fr - de - it pipeline_tag: text2text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-ai4privacy This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the [ai4privacy/pii-masking-65k](https://huggingface.co/datasets/ai4privacy/pii-masking-65k) dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.2 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1 - Datasets 2.14.4 - Tokenizers 0.13.3
PlankyxD/Taxi-v3
PlankyxD
"2023-06-24T18:28:08Z"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-06-24T18:28:07Z"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.65 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="PlankyxD/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
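For readers curious what "Q-Learning" means here, the tabular update rule behind such an agent can be sketched on a tiny chain environment (an illustrative re-implementation, not the code used to train this checkpoint):

```python
import random

def train_q_table(n_states=5, n_actions=2, episodes=500,
                  alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP: action 1 moves right,
    action 0 moves left, and reaching the last state gives reward 1."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state = rng.randrange(n_states - 1)  # exploring starts
        for _ in range(100):                 # cap episode length
            if state == n_states - 1:
                break
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[state][a])
            next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

q_table = train_q_table()
# After training, the greedy action in every non-terminal state is "right".
```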
modelscope-unofficial/damo-csanmt-en-zh-large-tfs
modelscope-unofficial
"2023-05-18T19:38:55Z"
4
0
tf-keras
[ "tf-keras", "translation", "license:apache-2.0", "region:us" ]
translation
"2023-05-17T23:29:36Z"
--- license: apache-2.0 pipeline_tag: translation --- TensorFlow saved model version of the original model: https://www.modelscope.cn/models/damo/nlp_csanmt_translation_en2zh/summary
kj42/s1_replicate_20250404_065413
kj42
"2025-04-04T06:55:24Z"
0
0
null
[ "region:us" ]
null
"2025-04-04T06:55:24Z"
DamianBoborzi/FLUX.1-schnell_meshleet
DamianBoborzi
"2025-04-11T11:17:29Z"
39
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "dataset:DamianBoborzi/MeshFleet", "base_model:black-forest-labs/FLUX.1-schnell", "base_model:adapter:black-forest-labs/FLUX.1-schnell", "license:apache-2.0", "region:us" ]
text-to-image
"2025-04-03T14:41:39Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: >- Blue BMW M-division sports coupe with modern aggressive styling, front 3/4 view, aggressive stance, carbon fiber accents, 20-inch wheels, studio lighting, dark background. output: url: samples/1744072164954__000004000_0.jpg - text: >- White Porsche electric sports sedan continuing their performance EV legacy, low angle front view, sleek design, LED headlights, studio lighting, minimalist background. output: url: samples/1744072169783__000004000_1.jpg - text: >- Silver Range Rover premium SUV with British luxury DNA, high-angle 3/4 rear view, black panoramic roof, 22-inch wheels, studio lighting, clean background. output: url: samples/1744072174593__000004000_2.jpg - text: >- Orange McLaren mid-engine supercar with signature butterfly doors, side profile, doors open, aerodynamic silhouette, carbon fiber details, studio lighting, black floor reflection. output: url: samples/1744072179398__000004000_3.jpg - text: >- Dark blue Volvo luxury SUV featuring their Scandinavian design language, 3/4 front view, Thor's hammer headlights, chrome accents, studio lighting, gradient background. output: url: samples/1744072184200__000004000_4.jpg - text: >- Purple Dodge American muscle car with classic Mopar heritage, 3/4 rear view, wide body kit, black spoiler, studio lighting, smoke effect on wheels. output: url: samples/1744072189007__000004000_5.jpg - text: >- Midnight green Jaguar convertible sports car embodying British performance tradition, 3/4 front view, top down, chrome exhaust tips, studio lighting, white background. output: url: samples/1744072193809__000004000_6.jpg - text: >- Bronze Lexus flagship luxury coupe with Japanese craftsmanship, low angle side view, multi-spoke wheels, flowing lines, studio lighting, dark gradient background. 
output: url: samples/1744072198622__000004000_7.jpg - text: >- Silver Audi electric grand tourer showcasing their e-mobility vision, front view, illuminated grille, aerodynamic profile, studio lighting, minimalist setting. output: url: samples/1744072203430__000004000_8.jpg - text: >- Yellow Ford iconic American muscle car with Shelby influence, 3/4 rear view, aggressive stance, large rear wing, quad exhausts, studio lighting, clean background. output: url: samples/1744072208236__000004000_9.jpg - text: >- White Toyota modern interpretation of their legendary sports car, low angle 3/4 front view, carbon fiber accents, red brake calipers, studio lighting, dark floor reflection. output: url: samples/1744072213065__000004000_10.jpg - text: >- Copper Kia performance sedan representing their modern sports approach, overhead view, panoramic sunroof, 19-inch wheels, studio lighting, black background. output: url: samples/1744072217874__000004000_11.jpg - text: >- Ice blue Nissan high-performance coupe inspired by their GT racing heritage, 3/4 front view, wide body kit, carbon fiber splitter, red accents, studio lighting, industrial backdrop. output: url: samples/1744072222682__000004000_12.jpg - text: >- British racing green Bentley luxury grand tourer continuing their handcrafted excellence, side profile, chrome details, multi-spoke wheels, studio lighting, elegant background. output: url: samples/1744072227489__000004000_13.jpg - text: >- Matte black Cadillac full-size luxury SUV with distinctive American styling, 3/4 front view, illuminated grille, large wheels, studio lighting, minimalist setting. output: url: samples/1744072232300__000004000_14.jpg - text: >- Pearl white Hyundai electric crossover showcasing their futuristic design language, 3/4 rear view, pixel-style lights, aerodynamic design, studio lighting, light gray background. 
output: url: samples/1744072237098__000004000_15.jpg - text: >- Gunmetal gray Subaru rally-inspired performance sedan with boxer engine heritage, 3/4 rear view, large wing, gold wheels, studio lighting, concrete floor. output: url: samples/1744072241898__000004000_16.jpg - text: >- Burgundy Genesis luxury flagship sedan with Korean premium craftsmanship, 3/4 front view, distinctive grille, chrome accents, studio lighting, gradient dark background. output: url: samples/1744072246703__000004000_17.jpg - text: >- Nardo gray Koenigsegg Swedish hypercar with innovative engineering, side profile, active aerodynamics, center-lock wheels, studio lighting, minimal background. output: url: samples/1744072251509__000004000_18.jpg - text: >- Sapphire blue Polestar Scandinavian electric performance vehicle, 3/4 front view, Thor's hammer headlights, aerodynamic wheels, studio lighting, white background. output: url: samples/1744072256317__000004000_19.jpg - text: >- Sleek blue sports sedan, 3/4 front view, aggressive stance, LED headlights, five-spoke alloy wheels, studio lighting, gradient background. output: url: samples/1744072261151__000004000_20.jpg - text: >- White luxury convertible, side profile, chrome accents, custom rims, studio lighting, minimalist setting. output: url: samples/1744072265956__000004000_21.jpg - text: >- Racing green grand tourer, 3/4 rear view, muscular haunches, quad exhaust tips, carbon fiber diffuser, studio lighting, dark floor reflection. output: url: samples/1744072270756__000004000_22.jpg - text: >- Matte gray electric SUV, 3/4 front view, illuminated badge, aerodynamic design, panoramic glass roof, studio lighting, clean white background. output: url: samples/1744072275561__000004000_23.jpg base_model: black-forest-labs/FLUX.1-schnell license: apache-2.0 datasets: - DamianBoborzi/MeshFleet --- # FLUX.1-schnell_meshleet FLUX.1-schnell LoRA trained with renders from [MeshFleet](https://huggingface.co/datasets/DamianBoborzi/MeshFleet). 
Either 'flux_schnell_meshfleet_lora32.safetensors' or 'flux_schnell_meshfleet_lora64.safetensors' can be loaded. Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words MeshFleet Finetune ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/DamianBoborzi/flux_schnell_meshleet_lora32/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-schnell', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('DamianBoborzi/FLUX.1-schnell_meshleet', weight_name='flux_schnell_meshleet_lora32.safetensors') # alternative: pipeline.load_lora_weights('DamianBoborzi/FLUX.1-schnell_meshleet', weight_name='flux_schnell_meshleet_lora64.safetensors') image = pipeline('Blue BMW M-division sports coupe with modern aggressive styling, front 3/4 view, aggressive stance, carbon fiber accents, 20-inch wheels, studio lighting, dark background.').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
featherless-ai-quants/nlpai-lab-KULLM3-GGUF
featherless-ai-quants
"2024-10-31T16:18:39Z"
10
0
null
[ "gguf", "text-generation", "base_model:nlpai-lab/KULLM3", "base_model:quantized:nlpai-lab/KULLM3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-10-31T15:53:35Z"
--- base_model: nlpai-lab/KULLM3 pipeline_tag: text-generation quantized_by: featherless-ai-quants --- # nlpai-lab/KULLM3 GGUF Quantizations 🚀 ![Featherless AI Quants](./featherless-quants.png) *Optimized GGUF quantization files for enhanced model performance* Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee. --- ## Available Quantizations 📊 | Quantization Type | File | Size | |-------------------|------|------| | Q8_0 | [nlpai-lab-KULLM3-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q8_0.gguf) | 10875.85 MB | | Q4_K_S | [nlpai-lab-KULLM3-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q4_K_S.gguf) | 5835.08 MB | | Q2_K | [nlpai-lab-KULLM3-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q2_K.gguf) | 3817.78 MB | | Q6_K | [nlpai-lab-KULLM3-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q6_K.gguf) | 8397.30 MB | | Q3_K_M | [nlpai-lab-KULLM3-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q3_K_M.gguf) | 4954.98 MB | | Q3_K_S | [nlpai-lab-KULLM3-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q3_K_S.gguf) | 4448.48 MB | | Q3_K_L | [nlpai-lab-KULLM3-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q3_K_L.gguf) | 5388.98 MB | | Q4_K_M | [nlpai-lab-KULLM3-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q4_K_M.gguf) | 6162.33 MB | | Q5_K_S | [nlpai-lab-KULLM3-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q5_K_S.gguf) | 7054.70 MB | | Q5_K_M | 
[nlpai-lab-KULLM3-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-Q5_K_M.gguf) | 7245.95 MB | | IQ4_XS | [nlpai-lab-KULLM3-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/nlpai-lab-KULLM3-GGUF/blob/main/nlpai-lab-KULLM3-IQ4_XS.gguf) | 5557.67 MB | --- ## ⚡ Powered by [Featherless AI](https://featherless.ai) ### Key Features - 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly - 🛠️ **Zero Infrastructure** - No server setup or maintenance required - 📚 **Vast Compatibility** - Support for 2400+ models and counting - 💎 **Affordable Pricing** - Starting at just $10/month --- **Links:** [Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
esmarquez17/hate-social-network
esmarquez17
"2023-11-30T01:22:28Z"
3
0
transformers
[ "transformers", "tf", "roberta", "text-classification", "generated_from_keras_callback", "base_model:esmarquez17/fine-tunning-roberta-bne-hate-offensive", "base_model:finetune:esmarquez17/fine-tunning-roberta-bne-hate-offensive", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-30T01:14:35Z"
--- license: apache-2.0 base_model: esmarquez17/fine-tunning-roberta-bne-hate-offensive tags: - generated_from_keras_callback model-index: - name: hate-social-network results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # hate-social-network This model is a fine-tuned version of [esmarquez17/fine-tunning-roberta-bne-hate-offensive](https://huggingface.co/esmarquez17/fine-tunning-roberta-bne-hate-offensive) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 3050, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results ### Framework versions - Transformers 4.35.2 - TensorFlow 2.14.0 - Datasets 2.15.0 - Tokenizers 0.15.0
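The `PolynomialDecay` schedule in the optimizer config above (initial_learning_rate 1e-05, decay_steps 3050, end_learning_rate 0.0, power 1.0, cycle False) decays the learning rate linearly to zero over 3050 steps. A minimal pure-Python sketch of that schedule, with the values taken from the config above (this mirrors the Keras formula for illustration, it is not the Keras implementation):

```python
def polynomial_decay(step, initial_lr=1e-05, decay_steps=3050, end_lr=0.0, power=1.0):
    # cycle=False clamps the step at decay_steps, so the LR stays at end_lr afterwards
    step = min(step, decay_steps)
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # start of training: 1e-05
print(polynomial_decay(1525))  # halfway: 5e-06
print(polynomial_decay(3050))  # fully decayed: 0.0
```

With power 1.0 this is plain linear decay, which is why the schedule reaches exactly 0.0 at the final training step.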
iimaginary/gemma2-unsloth-merged
iimaginary
"2024-05-05T14:59:15Z"
137
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "base_model:finetune:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-05T14:54:40Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl - sft base_model: unsloth/gemma-2b-it-bnb-4bit --- # Uploaded model - **Developed by:** iimaginary - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
FinalIroha/Madobe_Family_SoVITS4.0_Model
FinalIroha
"2023-05-31T06:39:16Z"
5
1
transformers
[ "transformers", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2023-05-15T12:13:23Z"
--- license: cc-by-nc-sa-4.0 --- # SoVITS 4.0 Madobe Family Model <!-- Provide a quick summary of what the model is/does. --> This model was generated with [SoVITS 4.0](https://github.com/svc-develop-team/so-vits-svc/tree/4.0). ## Model Characters <!-- Provide a quick summary of what the model is/does. --> - **Nanami Madobe:** 窓辺ななみ (Windows 7 mascot) CV: Nana Mizuki - **Claudia Madobe:** クラウディア・窓辺 (Azure mascot) CV: Eri Kitamura - **Ai Madobe:** 窓辺あい (Windows 8 mascot) CV: Nao Tamura - **Yuu Madobe:** 窓辺ゆう (Windows 8 mascot) CV: Asuka Nishi - **Toko Madobe:** 窓辺とおこ (Windows 10 mascot) CV: Ai Nonaka
Hemanta14/asmttshemanta
Hemanta14
"2024-11-01T15:14:54Z"
33
0
transformers
[ "transformers", "safetensors", "vits", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-11-01T15:14:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PaddlePaddle/ernie-layoutx-base-uncased
PaddlePaddle
"2023-01-06T07:58:48Z"
0
13
paddlenlp
[ "paddlenlp", "paddlepaddle", "ernie_layout", "en", "zh", "arxiv:2210.06155", "license:apache-2.0", "region:us" ]
null
"2023-01-06T07:45:21Z"
--- library_name: paddlenlp license: apache-2.0 language: - en - zh --- [![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP) # PaddlePaddle/ernie-layoutx-base-uncased ## Introduction Recent years have witnessed the rise and success of pre-training techniques in visually-rich document understanding. However, most existing methods lack the systematic mining and utilization of layout-centered knowledge, leading to sub-optimal performances. In this paper, we propose ERNIE-Layout, a novel document pre-training solution with layout knowledge enhancement in the whole workflow, to learn better representations that combine the features from text, layout, and image. Specifically, we first rearrange input sequences in the serialization stage, and then present a correlative pre-training task, reading order prediction, to learn the proper reading order of documents. To improve the layout awareness of the model, we integrate a spatial-aware disentangled attention into the multi-modal transformer and a replaced regions prediction task into the pre-training phase. Experimental results show that ERNIE-Layout achieves superior performance on various downstream tasks, setting new state-of-the-art on key information extraction, document image classification, and document question answering datasets. More detail: https://arxiv.org/abs/2210.06155 ## Available Models - ernie-layoutx-base-uncased ## How to Use? Click on the *Use in paddlenlp* button on the top right! 
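The "reading order prediction" task described above learns the proper serialization of document tokens. As a rough intuition only, and not the paper's learned objective or PaddleNLP's API, serializing OCR tokens into reading order can be sketched as grouping boxes into lines by vertical position and sorting each line left to right:

```python
def reading_order(tokens, line_tol=5):
    """tokens: list of (text, x, y) OCR boxes; returns texts in reading order.

    Heuristic sketch: tokens whose y-coordinates are within line_tol belong
    to the same line; each line is then sorted left-to-right by x.
    """
    tokens = sorted(tokens, key=lambda t: (t[2], t[1]))  # top-to-bottom first
    lines, current, last_y = [], [], None
    for text, x, y in tokens:
        if last_y is not None and y - last_y > line_tol:
            lines.append(current)   # vertical gap: start a new line
            current = []
        current.append((text, x, y))
        last_y = y
    if current:
        lines.append(current)
    return [t[0] for line in lines for t in sorted(line, key=lambda t: t[1])]

print(reading_order([("world", 50, 10), ("hello", 10, 12), ("below", 10, 40)]))
# ['hello', 'world', 'below']
```

ERNIE-Layout replaces this kind of fixed heuristic with a pre-training task, which is what lets it handle layouts where simple top-to-bottom, left-to-right rules break down.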
## Citation Info ```text @article{ernie2.0, title = {ERNIE-Layout: Layout Knowledge Enhanced Pre-training for Visually-rich Document Understanding}, author = {Peng, Qiming and Pan, Yinxu and Wang, Wenjin and Luo, Bin and Zhang, Zhenyu and Huang, Zhengjie and Hu, Teng and Yin, Weichong and Chen, Yongfeng and Zhang, Yin and Feng, Shikun and Sun, Yu and Tian, Hao and Wu, Hua and Wang, Haifeng}, journal={arXiv preprint arXiv:2210.06155}, year = {2022}, } ```
limbospoon/lunarlander-ppo
limbospoon
"2023-04-16T13:19:07Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2023-04-15T17:36:26Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 277.10 +/- 19.33 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
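The `mean_reward` metric above (277.10 +/- 19.33) is just the mean and standard deviation of the total episode reward over a batch of evaluation episodes. A stdlib sketch with made-up episode returns (Stable-Baselines3's `evaluate_policy` computes the same statistics with NumPy over real rollouts):

```python
import statistics

episode_rewards = [260.4, 295.1, 271.8, 281.1]  # hypothetical evaluation episodes

mean = statistics.mean(episode_rewards)
std = statistics.pstdev(episode_rewards)  # population std (ddof=0), like numpy's default
print(f"{mean:.2f} +/- {std:.2f}")  # 277.10 +/- 12.72
```

For LunarLander-v2, a mean reward of 200 or above over 100 episodes is the usual "solved" threshold, so the score reported in the card's metadata is comfortably above it.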
NickThe1/ppo-PyramidsTraining
NickThe1
"2023-06-01T05:00:39Z"
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
"2023-06-01T05:00:34Z"
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: NickThe1/ppo-PyramidsTraining 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf
RichardErkhov
"2025-02-18T17:14:10Z"
0
0
null
[ "gguf", "arxiv:2306.01708", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-18T16:51:35Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties - GGUF - Model creator: https://huggingface.co/ank028/ - Original model: https://huggingface.co/ank028/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q2_K.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q2_K.gguf) | Q2_K | 0.54GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ3_XS.gguf) | IQ3_XS | 0.58GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ3_S.gguf) | IQ3_S | 0.6GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K_S.gguf) | Q3_K_S | 0.6GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ3_M.gguf) | IQ3_M | 0.61GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K.gguf) | Q3_K | 0.64GB | | 
[Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K_M.gguf) | Q3_K_M | 0.64GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q3_K_L.gguf) | Q3_K_L | 0.68GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ4_XS.gguf) | IQ4_XS | 0.7GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_0.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_0.gguf) | Q4_0 | 0.72GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.IQ4_NL.gguf) | IQ4_NL | 0.72GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_K_S.gguf) | Q4_K_S | 0.72GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_K.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_K.gguf) | Q4_K | 0.75GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_K_M.gguf) | Q4_K_M | 0.75GB | | 
[Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_1.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q4_1.gguf) | Q4_1 | 0.77GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_0.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_0.gguf) | Q5_0 | 0.83GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_K_S.gguf) | Q5_K_S | 0.83GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_K.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_K.gguf) | Q5_K | 0.85GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_K_M.gguf) | Q5_K_M | 0.85GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_1.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q5_1.gguf) | Q5_1 | 0.89GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q6_K.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q6_K.gguf) | Q6_K | 0.95GB | | [Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q8_0.gguf](https://huggingface.co/RichardErkhov/ank028_-_Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties-gguf/blob/main/Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties.Q8_0.gguf) | Q8_0 | 1.23GB | Original model description: --- base_model: - 
ank028/Llama-3.2-1B-Instruct-gsm8k - meta-llama/Llama-3.2-1B-Instruct - autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1 library_name: transformers tags: - mergekit - merge --- # g_l This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [ank028/Llama-3.2-1B-Instruct-gsm8k](https://huggingface.co/ank028/Llama-3.2-1B-Instruct-gsm8k) * [autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1](https://huggingface.co/autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ank028/Llama-3.2-1B-Instruct-gsm8k parameters: density: 0.5 # density gradient weight: 1.0 - model: autoprogrammer/Llama-3.2-1B-Instruct-MGSM8K-sft1 parameters: density: 0.5 weight: 0.5 # weight gradient merge_method: ties base_model: meta-llama/Llama-3.2-1B-Instruct parameters: normalize: true int8_mask: false dtype: float16 name: Llama-3.2-1B-Instruct-gsm8k-MGSM8K-sft1-ties ```
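The TIES method referenced above follows three steps per parameter: trim low-magnitude task-vector entries, elect a dominant sign, then merge only the deltas that agree with it. A toy pure-Python sketch over flat parameter lists, where `density` and `weights` play the same roles as in the YAML config (this is an illustrative simplification, not mergekit's implementation):

```python
def ties_merge(base, deltas, density=0.5, weights=None):
    """base: flat list of base weights; deltas: per-model task vectors (model - base)."""
    weights = weights or [1.0] * len(deltas)
    # 1. Trim: keep only the top-`density` fraction of each task vector by magnitude
    trimmed = []
    for d in deltas:
        k = max(1, int(len(d) * density))
        cutoff = sorted(map(abs, d), reverse=True)[k - 1]
        trimmed.append([x if abs(x) >= cutoff else 0.0 for x in d])
    merged = []
    for i, b in enumerate(base):
        col = [w * t[i] for w, t in zip(weights, trimmed)]
        sign = 1.0 if sum(col) >= 0 else -1.0           # 2. Elect the dominant sign
        agree = [c for c in col if c * sign > 0]        # 3. Keep only agreeing deltas
        merged.append(b + sum(agree) / len(agree) if agree else b)
    return merged

print([round(v, 6) for v in ties_merge([0.0, 0.0], [[1.0, -0.2], [0.8, 0.4]], density=1.0)])
```

In the second coordinate the two task vectors disagree in sign, so the losing delta (-0.2) is dropped entirely rather than averaged in, which is the interference-reduction idea behind TIES.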
MoMonir/aya-23-8B-GGUF
MoMonir
"2024-05-25T22:10:35Z"
5
0
transformers
[ "transformers", "gguf", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "el", "fa", "pl", "id", "cs", "he", "hi", "nl", "ro", "ru", "tr", "uk", "vi", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-05-25T21:40:03Z"
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 --- # MoMonir/aya-23-8B-GGUF This model was converted to GGUF format from [`CohereForAI/aya-23-8B`](https://huggingface.co/CohereForAI/aya-23-8B) using llama.cpp. Refer to the [original model card](https://huggingface.co/CohereForAI/aya-23-8B) for more details on the model. <!-- README_GGUF.md-about-gguf start --> ### About GGUF ([TheBloke](https://huggingface.co/TheBloke) Description) GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo MoMonir/aya-23-8B-GGUF --model aya-23-8B.Q4_K_M.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo MoMonir/aya-23-8B-GGUF --model aya-23-8B.Q4_K_M.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m aya-23-8B.Q4_K_M.gguf -n 128 ```
EdsonMorro/analise-sentimento
EdsonMorro
"2025-02-13T18:21:55Z"
0
0
null
[ "safetensors", "bert", "license:mit", "region:us" ]
null
"2025-02-07T14:22:08Z"
--- license: mit --- # Sentiment Analysis This repository contains a sentiment analysis model based on a pre-trained model, built to classify texts as positive, negative, or neutral. It is currently used to identify sentiment in essays, helping to understand the emotions and intentions expressed by their authors. ## What is sentiment analysis? Sentiment analysis is a natural language processing technique widely used to extract and interpret subjective opinions from text; it is applied in many contexts, such as product reviews, customer feedback, and trend monitoring. The process involves three main steps: text preprocessing, feature extraction, and classification using machine learning or deep learning models. ## Model Characteristics - **Architecture**: A **BERT**-based model, optimized for sentiment analysis. - **Training Data**: Trained on a dataset that includes **financial news**, enabling a better understanding of texts in that domain. - **Language**: Primarily **Portuguese (pt-BR)**. # Using this model To use the sentiment analysis model, you can run the following code: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline MODEL_NAME = "sptech-ai/sptech.template.ai.model.sentiment" tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME) sentiment_pipeline = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer) texto = "O mercado financeiro teve um grande crescimento este ano." resultado = sentiment_pipeline(texto) print(resultado) ```
kadirnar/RRDB_ESRGAN_x4
kadirnar
"2022-12-22T23:05:06Z"
0
2
null
[ "Super-Resolution", "computer-vision", "ESRGAN", "gan", "arxiv:2107.10833", "license:apache-2.0", "region:us" ]
null
"2022-12-22T22:42:40Z"
--- license: apache-2.0 tags: - Super-Resolution - computer-vision - ESRGAN - gan --- ### Model Description [ESRGAN](https://arxiv.org/abs/2107.10833): ECCV18 Workshops - Enhanced SRGAN. Champion PIRM Challenge on Perceptual Super-Resolution [Paper Repo](https://github.com/xinntao/ESRGAN): Implementation of paper. ### Installation ``` pip install bsrgan ``` ### BSRGAN Usage ```python from bsrgan import BSRGAN model = BSRGAN(weights='kadirnar/RRDB_ESRGAN_x4', device='cuda:0', hf_model=True) model.save = True pred = model.predict(img_path='data/image/test.png') ``` ### BibTeX Entry and Citation Info ``` @inproceedings{zhang2021designing, title={Designing a Practical Degradation Model for Deep Blind Image Super-Resolution}, author={Zhang, Kai and Liang, Jingyun and Van Gool, Luc and Timofte, Radu}, booktitle={IEEE International Conference on Computer Vision}, pages={4791--4800}, year={2021} } ``` ``` @InProceedings{wang2018esrgan, author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change}, title = {ESRGAN: Enhanced super-resolution generative adversarial networks}, booktitle = {The European Conference on Computer Vision Workshops (ECCVW)}, month = {September}, year = {2018} } ```
CarpetCleaningPlanoTX/CarpetStainRemovalPlanoTX
CarpetCleaningPlanoTX
"2022-12-11T07:29:56Z"
0
0
null
[ "license:other", "region:us" ]
null
"2022-12-11T07:29:29Z"
--- license: other --- Carpet Stain Removal Plano TX https://carpetcleaningplanotx.com/carpet-stain-removal.html ‪(469) 444-1903‬ Carpet Cleaning Plano in Texas is the company of choice for most customers when it comes to stain removal. We have the best-trained staff and professional technology. We will remove even the worst stain, whether it is on your upholstery, fabrics, curtains, or carpets. Try us today, and you'll see why most people prefer us to everyone else.
lyhourt/whisper-small-hu-2500
lyhourt
"2024-05-12T19:04:04Z"
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:mozilla-foundation/common_voice_17_0", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-05-12T14:29:17Z"
--- license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_17_0 metrics: - wer model-index: - name: Whisper Small Hu 1500 - Lyhourt TE results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 17.0 type: mozilla-foundation/common_voice_17_0 metrics: - name: Wer type: wer value: 23.312927361901696 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Hu 1500 - Lyhourt TE This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2408 - Wer: 23.3129 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 250 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2358 | 1.0 | 250 | 0.2408 | 23.3129 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
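The WER figure reported above is the word error rate: the word-level edit distance between the reference transcript and the model's hypothesis, divided by the number of reference words. A minimal self-contained sketch of the definition (only to illustrate the metric, not the library that produced the score above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

print(round(100 * wer("egy kettő három", "egy ketto három"), 2))  # 33.33
```

A WER of 23.31 therefore means roughly one word-level error (substitution, insertion, or deletion) for every four or five reference words.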
mradermacher/llama-3-8b-quizgen-instruct-GGUF
mradermacher
"2024-11-04T20:04:35Z"
12
0
transformers
[ "transformers", "gguf", "en", "base_model:jazzysnake01/llama-3-8b-quizgen-instruct", "base_model:quantized:jazzysnake01/llama-3-8b-quizgen-instruct", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-04T19:49:04Z"
--- base_model: jazzysnake01/llama-3-8b-quizgen-instruct language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/jazzysnake01/llama-3-8b-quizgen-instruct <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-quizgen-instruct-GGUF/resolve/main/llama-3-8b-quizgen-instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
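As a rough sanity check on the quant table above, bits per weight can be estimated from file size alone. A minimal sketch (the ~8.03B parameter count for a Llama 3 8B model is an assumption, not stated in this card):

```python
# Estimate bits per weight (bpw) for a quant from its file size.
# Assumes ~8.03e9 parameters (typical for a Llama 3 8B model);
# sizes are taken from the table above.
def bits_per_weight(size_gb: float, n_params: float = 8.03e9) -> float:
    return size_gb * 1e9 * 8 / n_params

for name, size_gb in [("Q4_K_M", 5.0), ("Q8_0", 8.6), ("f16", 16.2)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bpw")
```

The Q4_K_M entry at 5.0 GB works out to roughly 5 bits per weight and Q8_0 at 8.6 GB to roughly 8.6, in line with what the quant names suggest.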
gfalcao/smkfr24jun
gfalcao
"2023-06-24T11:58:42Z"
34
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-06-24T11:47:12Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### smkfr24Jun Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf
RichardErkhov
"2025-03-02T08:49:30Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-03-02T08:46:42Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SmolLM2-FT-MyDataset - GGUF - Model creator: https://huggingface.co/pankajmlai/ - Original model: https://huggingface.co/pankajmlai/SmolLM2-FT-MyDataset/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SmolLM2-FT-MyDataset.Q2_K.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q2_K.gguf) | Q2_K | 0.08GB | | [SmolLM2-FT-MyDataset.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [SmolLM2-FT-MyDataset.IQ3_S.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_S.gguf) | IQ3_S | 0.08GB | | [SmolLM2-FT-MyDataset.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [SmolLM2-FT-MyDataset.IQ3_M.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ3_M.gguf) | IQ3_M | 0.08GB | | [SmolLM2-FT-MyDataset.Q3_K.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K.gguf) | Q3_K | 0.09GB | | [SmolLM2-FT-MyDataset.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [SmolLM2-FT-MyDataset.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q3_K_L.gguf) | Q3_K_L | 0.09GB | | [SmolLM2-FT-MyDataset.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ4_XS.gguf) | IQ4_XS | 0.09GB | | 
[SmolLM2-FT-MyDataset.Q4_0.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_0.gguf) | Q4_0 | 0.09GB | | [SmolLM2-FT-MyDataset.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.IQ4_NL.gguf) | IQ4_NL | 0.09GB | | [SmolLM2-FT-MyDataset.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [SmolLM2-FT-MyDataset.Q4_K.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K.gguf) | Q4_K | 0.1GB | | [SmolLM2-FT-MyDataset.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_K_M.gguf) | Q4_K_M | 0.1GB | | [SmolLM2-FT-MyDataset.Q4_1.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q4_1.gguf) | Q4_1 | 0.09GB | | [SmolLM2-FT-MyDataset.Q5_0.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_0.gguf) | Q5_0 | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K_S.gguf) | Q5_K_S | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_K.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K.gguf) | Q5_K | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_K_M.gguf) | Q5_K_M | 0.1GB | | [SmolLM2-FT-MyDataset.Q5_1.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q5_1.gguf) | Q5_1 | 0.1GB | | 
[SmolLM2-FT-MyDataset.Q6_K.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q6_K.gguf) | Q6_K | 0.13GB | | [SmolLM2-FT-MyDataset.Q8_0.gguf](https://huggingface.co/RichardErkhov/pankajmlai_-_SmolLM2-FT-MyDataset-gguf/blob/main/SmolLM2-FT-MyDataset.Q8_0.gguf) | Q8_0 | 0.13GB | Original model description: --- base_model: HuggingFaceTB/SmolLM2-135M library_name: transformers model_name: SmolLM2-FT-MyDataset tags: - generated_from_trainer - smol-course - module_1 - trl - sft licence: license --- # Model Card for SmolLM2-FT-MyDataset This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="pankajmlai/SmolLM2-FT-MyDataset", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mountaintree-none/huggingface/runs/hw7yucl1) This model was trained with SFT. 
### Framework versions - TRL: 0.13.0 - Transformers: 4.47.1 - Pytorch: 2.5.1+cu121 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
salsina/llama_full
salsina
"2025-02-12T09:09:31Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-12T09:08:57Z"
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** salsina - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
eliftansusunar/elif
eliftansusunar
"2023-12-11T18:19:17Z"
5
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-11T18:14:35Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### elif Dreambooth model trained by eliftansusunar with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/eliftansusunar/elif/resolve/main/sample_images/elif_(5).jpg) ![1](https://huggingface.co/eliftansusunar/elif/resolve/main/sample_images/elif.jpg) ![2](https://huggingface.co/eliftansusunar/elif/resolve/main/sample_images/elif_(1).jpg) ![3](https://huggingface.co/eliftansusunar/elif/resolve/main/sample_images/elif_(4).jpg) ![4](https://huggingface.co/eliftansusunar/elif/resolve/main/sample_images/elif_(3).jpg) ![5](https://huggingface.co/eliftansusunar/elif/resolve/main/sample_images/elif_(2).jpg)
StepLaw/StepLaw-N_1.0B-D_7.0B-LR9.766e-04-BS524288
StepLaw
"2025-04-04T11:32:22Z"
0
0
transformers
[ "transformers", "safetensors", "step1", "text-generation", "StepLaw", "causal-lm", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-04T11:29:03Z"
--- license: apache-2.0 tags: - StepLaw - causal-lm language: - en library_name: transformers pipeline_tag: text-generation model-index: - name: step2v2_0618_h2048_ffnh8192_numh16_numl16_lr9.766e-04_bs256_ti15258_mlr1e-5 results: [] --- # Wandb Model Name: step2v2_0618_h2048_ffnh8192_numh16_numl16_lr9.766e-04_bs256_ti15258_mlr1e-5 This model is part of the [StepLaw-N_1.0B-D_7.0B](https://huggingface.co/collections/StepLaw/StepLaw-N_1.0B-D_7.0B) collection. ## Model Specifications ### Architecture - **Hidden size (H)**: 2048 - **Feed-forward network size (FFN)**: 8192 - **Attention heads**: 16 - **Layers**: 16 - **Parameter count**: 1.1B ### Training Parameters - **Learning rate (lr)**: 9.766e-04 - **Batch size (bs)**: 256 - **Training iterations**: 15258 - **Training tokens (D)**: 8.0B ## Model Description StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 9.766e-04 and batch size 256 for 15258 iterations, using a total of 8.0B training tokens. ## Usage Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "StepLaw/StepLaw-N_1.0B-D_7.0B-LR9.766e-04-BS524288" tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True) # Generate text inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt") outputs = model.generate(**inputs, max_length=100) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Part of StepLaw Project StepLaw is an initiative to provide thousands of models for optimal hyperparameter research. Visit [StepLaw Project](https://step-law.github.io/) for more information.
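The hyperparameters above are internally consistent: with a batch of 256 sequences and an assumed 2048-token context (not stated in the card, but it makes bs × seq_len match the BS524288 in the repository name), 15258 steps consume roughly the quoted 8.0B tokens. A quick check:

```python
# Sanity-check the training-token accounting from this card.
# The 2048-token sequence length is an assumption; it is what makes
# bs * seq_len equal the BS524288 in the repository name.
batch_sequences = 256
seq_len = 2048          # assumed context length
iterations = 15258

tokens_per_step = batch_sequences * seq_len
total_tokens = tokens_per_step * iterations
print(tokens_per_step)      # 524288
print(total_tokens / 1e9)   # ≈ 8.0 (billion tokens)
```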
MayBashendy/ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k19_task5_organization
MayBashendy
"2025-01-12T20:55:53Z"
9
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-01-12T20:48:33Z"
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k19_task5_organization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits7_FineTuningAraBERT_run1_AugV5_k19_task5_organization This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8857 - Qwk: 0.4809 - Mse: 0.8857 - Rmse: 0.9411 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0417 | 2 | 4.3322 | -0.0048 | 4.3322 | 2.0814 | | No log | 0.0833 | 4 | 2.5281 | -0.0340 | 2.5281 | 1.5900 | | No log | 0.125 | 6 | 1.4897 | 0.0185 | 1.4897 | 1.2205 | | No log | 0.1667 | 8 | 1.1472 | 0.2343 | 1.1472 | 1.0711 | | No log | 0.2083 | 10 | 1.4258 | 0.0343 | 1.4258 | 1.1941 | | No log | 0.25 | 12 | 1.4676 | 0.0568 | 1.4676 | 1.2114 | | No log | 0.2917 | 14 | 1.1440 | 0.1848 | 1.1440 | 1.0696 | | No log | 0.3333 | 16 | 1.0720 | 0.1313 | 1.0720 | 1.0354 | | No log | 0.375 | 18 | 1.0918 | 0.1864 | 1.0918 | 1.0449 | | No log | 0.4167 | 20 | 1.0645 | 0.0422 | 1.0645 | 1.0318 | | No log | 0.4583 | 22 | 1.1121 | 0.1076 | 
1.1121 | 1.0546 | | No log | 0.5 | 24 | 1.1023 | 0.0824 | 1.1023 | 1.0499 | | No log | 0.5417 | 26 | 1.1543 | 0.1203 | 1.1543 | 1.0744 | | No log | 0.5833 | 28 | 1.0460 | 0.1263 | 1.0460 | 1.0228 | | No log | 0.625 | 30 | 0.9569 | 0.2865 | 0.9569 | 0.9782 | | No log | 0.6667 | 32 | 0.9375 | 0.2865 | 0.9375 | 0.9682 | | No log | 0.7083 | 34 | 0.9329 | 0.2671 | 0.9329 | 0.9659 | | No log | 0.75 | 36 | 1.0419 | 0.1881 | 1.0419 | 1.0207 | | No log | 0.7917 | 38 | 1.4336 | -0.0270 | 1.4336 | 1.1973 | | No log | 0.8333 | 40 | 1.3413 | -0.0112 | 1.3413 | 1.1581 | | No log | 0.875 | 42 | 0.8825 | 0.3221 | 0.8825 | 0.9394 | | No log | 0.9167 | 44 | 0.9190 | 0.4406 | 0.9190 | 0.9587 | | No log | 0.9583 | 46 | 0.9979 | 0.2956 | 0.9979 | 0.9989 | | No log | 1.0 | 48 | 0.9045 | 0.4357 | 0.9045 | 0.9510 | | No log | 1.0417 | 50 | 0.8200 | 0.4210 | 0.8200 | 0.9055 | | No log | 1.0833 | 52 | 0.8961 | 0.3815 | 0.8961 | 0.9466 | | No log | 1.125 | 54 | 0.9969 | 0.2956 | 0.9969 | 0.9984 | | No log | 1.1667 | 56 | 1.1032 | 0.2038 | 1.1032 | 1.0503 | | No log | 1.2083 | 58 | 1.1889 | 0.1426 | 1.1889 | 1.0904 | | No log | 1.25 | 60 | 0.9552 | 0.3958 | 0.9552 | 0.9773 | | No log | 1.2917 | 62 | 0.8357 | 0.5472 | 0.8357 | 0.9142 | | No log | 1.3333 | 64 | 0.8712 | 0.5062 | 0.8712 | 0.9334 | | No log | 1.375 | 66 | 0.8265 | 0.5195 | 0.8265 | 0.9091 | | No log | 1.4167 | 68 | 0.7830 | 0.5107 | 0.7830 | 0.8849 | | No log | 1.4583 | 70 | 0.7917 | 0.5528 | 0.7917 | 0.8898 | | No log | 1.5 | 72 | 0.8603 | 0.5279 | 0.8603 | 0.9275 | | No log | 1.5417 | 74 | 0.8340 | 0.5183 | 0.8340 | 0.9132 | | No log | 1.5833 | 76 | 0.7311 | 0.5329 | 0.7311 | 0.8550 | | No log | 1.625 | 78 | 0.7282 | 0.5748 | 0.7282 | 0.8533 | | No log | 1.6667 | 80 | 0.7276 | 0.5650 | 0.7276 | 0.8530 | | No log | 1.7083 | 82 | 0.7115 | 0.5797 | 0.7115 | 0.8435 | | No log | 1.75 | 84 | 0.8166 | 0.4921 | 0.8166 | 0.9037 | | No log | 1.7917 | 86 | 0.7273 | 0.5654 | 0.7273 | 0.8528 | | No log | 1.8333 | 88 | 0.7993 | 0.5339 | 
0.7993 | 0.8941 | | No log | 1.875 | 90 | 1.4456 | 0.3099 | 1.4456 | 1.2023 | | No log | 1.9167 | 92 | 1.4957 | 0.3138 | 1.4957 | 1.2230 | | No log | 1.9583 | 94 | 1.1015 | 0.3539 | 1.1015 | 1.0495 | | No log | 2.0 | 96 | 0.7592 | 0.4838 | 0.7592 | 0.8713 | | No log | 2.0417 | 98 | 0.8233 | 0.4962 | 0.8233 | 0.9073 | | No log | 2.0833 | 100 | 1.0542 | 0.3040 | 1.0542 | 1.0268 | | No log | 2.125 | 102 | 1.0384 | 0.3424 | 1.0384 | 1.0190 | | No log | 2.1667 | 104 | 0.8155 | 0.5245 | 0.8155 | 0.9031 | | No log | 2.2083 | 106 | 0.7862 | 0.5575 | 0.7862 | 0.8867 | | No log | 2.25 | 108 | 0.8147 | 0.4810 | 0.8147 | 0.9026 | | No log | 2.2917 | 110 | 0.8344 | 0.3800 | 0.8344 | 0.9134 | | No log | 2.3333 | 112 | 0.8142 | 0.5342 | 0.8142 | 0.9023 | | No log | 2.375 | 114 | 0.9228 | 0.4482 | 0.9228 | 0.9606 | | No log | 2.4167 | 116 | 0.9353 | 0.4577 | 0.9353 | 0.9671 | | No log | 2.4583 | 118 | 0.8401 | 0.4966 | 0.8401 | 0.9166 | | No log | 2.5 | 120 | 0.7965 | 0.5450 | 0.7965 | 0.8924 | | No log | 2.5417 | 122 | 0.8057 | 0.4996 | 0.8057 | 0.8976 | | No log | 2.5833 | 124 | 0.7873 | 0.5124 | 0.7873 | 0.8873 | | No log | 2.625 | 126 | 0.8080 | 0.4964 | 0.8080 | 0.8989 | | No log | 2.6667 | 128 | 0.8035 | 0.4969 | 0.8035 | 0.8964 | | No log | 2.7083 | 130 | 0.7780 | 0.5451 | 0.7780 | 0.8821 | | No log | 2.75 | 132 | 0.8098 | 0.5300 | 0.8098 | 0.8999 | | No log | 2.7917 | 134 | 0.7938 | 0.5038 | 0.7938 | 0.8909 | | No log | 2.8333 | 136 | 0.8025 | 0.5261 | 0.8025 | 0.8958 | | No log | 2.875 | 138 | 0.8010 | 0.4903 | 0.8010 | 0.8950 | | No log | 2.9167 | 140 | 0.8132 | 0.5463 | 0.8132 | 0.9018 | | No log | 2.9583 | 142 | 0.9190 | 0.4521 | 0.9190 | 0.9587 | | No log | 3.0 | 144 | 1.0141 | 0.3959 | 1.0141 | 1.0070 | | No log | 3.0417 | 146 | 0.9964 | 0.4150 | 0.9964 | 0.9982 | | No log | 3.0833 | 148 | 0.8910 | 0.3993 | 0.8910 | 0.9439 | | No log | 3.125 | 150 | 0.8635 | 0.4910 | 0.8635 | 0.9292 | | No log | 3.1667 | 152 | 0.8781 | 0.4956 | 0.8781 | 0.9371 | | No log | 3.2083 | 
154 | 0.8687 | 0.5002 | 0.8687 | 0.9320 | | No log | 3.25 | 156 | 0.9479 | 0.4244 | 0.9479 | 0.9736 | | No log | 3.2917 | 158 | 0.9392 | 0.4098 | 0.9392 | 0.9691 | | No log | 3.3333 | 160 | 0.8499 | 0.5135 | 0.8499 | 0.9219 | | No log | 3.375 | 162 | 0.8727 | 0.3908 | 0.8727 | 0.9342 | | No log | 3.4167 | 164 | 0.8658 | 0.3908 | 0.8658 | 0.9305 | | No log | 3.4583 | 166 | 0.8206 | 0.4676 | 0.8206 | 0.9059 | | No log | 3.5 | 168 | 0.8539 | 0.4599 | 0.8539 | 0.9241 | | No log | 3.5417 | 170 | 0.9221 | 0.4695 | 0.9221 | 0.9603 | | No log | 3.5833 | 172 | 0.8504 | 0.4510 | 0.8504 | 0.9222 | | No log | 3.625 | 174 | 0.7662 | 0.5621 | 0.7662 | 0.8753 | | No log | 3.6667 | 176 | 0.8112 | 0.5098 | 0.8112 | 0.9007 | | No log | 3.7083 | 178 | 0.7759 | 0.5253 | 0.7759 | 0.8809 | | No log | 3.75 | 180 | 0.7376 | 0.5752 | 0.7376 | 0.8589 | | No log | 3.7917 | 182 | 0.7213 | 0.5822 | 0.7213 | 0.8493 | | No log | 3.8333 | 184 | 0.7138 | 0.5572 | 0.7138 | 0.8449 | | No log | 3.875 | 186 | 0.6895 | 0.5247 | 0.6895 | 0.8303 | | No log | 3.9167 | 188 | 0.6717 | 0.5747 | 0.6717 | 0.8195 | | No log | 3.9583 | 190 | 0.6500 | 0.5871 | 0.6500 | 0.8062 | | No log | 4.0 | 192 | 0.6237 | 0.6076 | 0.6237 | 0.7898 | | No log | 4.0417 | 194 | 0.6414 | 0.6179 | 0.6414 | 0.8009 | | No log | 4.0833 | 196 | 0.6532 | 0.6512 | 0.6532 | 0.8082 | | No log | 4.125 | 198 | 0.6099 | 0.5742 | 0.6099 | 0.7810 | | No log | 4.1667 | 200 | 0.6747 | 0.6082 | 0.6747 | 0.8214 | | No log | 4.2083 | 202 | 0.7341 | 0.5857 | 0.7341 | 0.8568 | | No log | 4.25 | 204 | 0.6794 | 0.5548 | 0.6794 | 0.8243 | | No log | 4.2917 | 206 | 0.6475 | 0.5577 | 0.6475 | 0.8047 | | No log | 4.3333 | 208 | 0.7071 | 0.5837 | 0.7071 | 0.8409 | | No log | 4.375 | 210 | 0.7083 | 0.5746 | 0.7083 | 0.8416 | | No log | 4.4167 | 212 | 0.6769 | 0.5391 | 0.6769 | 0.8227 | | No log | 4.4583 | 214 | 0.6887 | 0.5214 | 0.6887 | 0.8299 | | No log | 4.5 | 216 | 0.8266 | 0.5271 | 0.8266 | 0.9092 | | No log | 4.5417 | 218 | 0.8963 | 0.5070 | 0.8963 | 
0.9467 | | No log | 4.5833 | 220 | 0.7722 | 0.5345 | 0.7722 | 0.8788 | | No log | 4.625 | 222 | 0.6847 | 0.5396 | 0.6847 | 0.8275 | | No log | 4.6667 | 224 | 0.7353 | 0.5585 | 0.7353 | 0.8575 | | No log | 4.7083 | 226 | 0.7818 | 0.4560 | 0.7818 | 0.8842 | | No log | 4.75 | 228 | 0.7743 | 0.3882 | 0.7743 | 0.8799 | | No log | 4.7917 | 230 | 0.7755 | 0.4373 | 0.7755 | 0.8806 | | No log | 4.8333 | 232 | 0.7595 | 0.4090 | 0.7595 | 0.8715 | | No log | 4.875 | 234 | 0.7377 | 0.4124 | 0.7377 | 0.8589 | | No log | 4.9167 | 236 | 0.7292 | 0.5025 | 0.7292 | 0.8539 | | No log | 4.9583 | 238 | 0.7676 | 0.4973 | 0.7676 | 0.8761 | | No log | 5.0 | 240 | 0.8075 | 0.4952 | 0.8075 | 0.8986 | | No log | 5.0417 | 242 | 0.8578 | 0.5458 | 0.8578 | 0.9262 | | No log | 5.0833 | 244 | 0.7872 | 0.4597 | 0.7872 | 0.8872 | | No log | 5.125 | 246 | 0.7191 | 0.5734 | 0.7191 | 0.8480 | | No log | 5.1667 | 248 | 0.7071 | 0.5171 | 0.7071 | 0.8409 | | No log | 5.2083 | 250 | 0.8157 | 0.4578 | 0.8157 | 0.9032 | | No log | 5.25 | 252 | 0.9052 | 0.4280 | 0.9052 | 0.9514 | | No log | 5.2917 | 254 | 0.8428 | 0.4489 | 0.8428 | 0.9180 | | No log | 5.3333 | 256 | 0.7293 | 0.5329 | 0.7293 | 0.8540 | | No log | 5.375 | 258 | 0.7361 | 0.5232 | 0.7361 | 0.8580 | | No log | 5.4167 | 260 | 0.8102 | 0.4697 | 0.8102 | 0.9001 | | No log | 5.4583 | 262 | 0.7904 | 0.5088 | 0.7904 | 0.8890 | | No log | 5.5 | 264 | 0.7732 | 0.5304 | 0.7732 | 0.8793 | | No log | 5.5417 | 266 | 0.7664 | 0.5303 | 0.7664 | 0.8754 | | No log | 5.5833 | 268 | 0.7687 | 0.4835 | 0.7687 | 0.8768 | | No log | 5.625 | 270 | 0.8093 | 0.4513 | 0.8093 | 0.8996 | | No log | 5.6667 | 272 | 0.7795 | 0.4069 | 0.7795 | 0.8829 | | No log | 5.7083 | 274 | 0.8003 | 0.4714 | 0.8003 | 0.8946 | | No log | 5.75 | 276 | 0.8880 | 0.4310 | 0.8880 | 0.9423 | | No log | 5.7917 | 278 | 0.8889 | 0.4318 | 0.8889 | 0.9428 | | No log | 5.8333 | 280 | 0.8459 | 0.4439 | 0.8459 | 0.9197 | | No log | 5.875 | 282 | 0.8710 | 0.4310 | 0.8710 | 0.9333 | | No log | 5.9167 | 284 
| 0.8661 | 0.4310 | 0.8661 | 0.9306 | | No log | 5.9583 | 286 | 0.8013 | 0.4714 | 0.8013 | 0.8952 | | No log | 6.0 | 288 | 0.7552 | 0.4748 | 0.7552 | 0.8690 | | No log | 6.0417 | 290 | 0.7615 | 0.4748 | 0.7615 | 0.8726 | | No log | 6.0833 | 292 | 0.7995 | 0.4461 | 0.7995 | 0.8941 | | No log | 6.125 | 294 | 0.8797 | 0.4054 | 0.8797 | 0.9379 | | No log | 6.1667 | 296 | 0.9766 | 0.4197 | 0.9766 | 0.9882 | | No log | 6.2083 | 298 | 0.9645 | 0.3953 | 0.9645 | 0.9821 | | No log | 6.25 | 300 | 0.8797 | 0.4558 | 0.8797 | 0.9379 | | No log | 6.2917 | 302 | 0.7774 | 0.5305 | 0.7774 | 0.8817 | | No log | 6.3333 | 304 | 0.7073 | 0.4883 | 0.7073 | 0.8410 | | No log | 6.375 | 306 | 0.6890 | 0.5023 | 0.6890 | 0.8300 | | No log | 6.4167 | 308 | 0.6824 | 0.5129 | 0.6824 | 0.8261 | | No log | 6.4583 | 310 | 0.7034 | 0.5676 | 0.7034 | 0.8387 | | No log | 6.5 | 312 | 0.7455 | 0.5663 | 0.7455 | 0.8634 | | No log | 6.5417 | 314 | 0.7167 | 0.5880 | 0.7167 | 0.8466 | | No log | 6.5833 | 316 | 0.6949 | 0.5510 | 0.6949 | 0.8336 | | No log | 6.625 | 318 | 0.7036 | 0.5950 | 0.7036 | 0.8388 | | No log | 6.6667 | 320 | 0.7000 | 0.5510 | 0.7000 | 0.8367 | | No log | 6.7083 | 322 | 0.7009 | 0.5510 | 0.7009 | 0.8372 | | No log | 6.75 | 324 | 0.7073 | 0.5342 | 0.7073 | 0.8410 | | No log | 6.7917 | 326 | 0.7462 | 0.5599 | 0.7462 | 0.8638 | | No log | 6.8333 | 328 | 0.8446 | 0.5735 | 0.8446 | 0.9190 | | No log | 6.875 | 330 | 0.8554 | 0.4794 | 0.8554 | 0.9249 | | No log | 6.9167 | 332 | 0.7694 | 0.5964 | 0.7694 | 0.8772 | | No log | 6.9583 | 334 | 0.7019 | 0.4903 | 0.7019 | 0.8378 | | No log | 7.0 | 336 | 0.7305 | 0.4765 | 0.7305 | 0.8547 | | No log | 7.0417 | 338 | 0.7227 | 0.4641 | 0.7227 | 0.8501 | | No log | 7.0833 | 340 | 0.6865 | 0.5033 | 0.6865 | 0.8286 | | No log | 7.125 | 342 | 0.7616 | 0.4958 | 0.7616 | 0.8727 | | No log | 7.1667 | 344 | 0.8859 | 0.4216 | 0.8859 | 0.9412 | | No log | 7.2083 | 346 | 0.9132 | 0.4216 | 0.9132 | 0.9556 | | No log | 7.25 | 348 | 0.8983 | 0.3539 | 0.8983 | 0.9478 
| | No log | 7.2917 | 350 | 0.8823 | 0.3222 | 0.8823 | 0.9393 | | No log | 7.3333 | 352 | 0.8383 | 0.3169 | 0.8383 | 0.9156 | | No log | 7.375 | 354 | 0.7845 | 0.4309 | 0.7845 | 0.8857 | | No log | 7.4167 | 356 | 0.7686 | 0.4576 | 0.7686 | 0.8767 | | No log | 7.4583 | 358 | 0.7825 | 0.5242 | 0.7825 | 0.8846 | | No log | 7.5 | 360 | 0.7832 | 0.5666 | 0.7832 | 0.8850 | | No log | 7.5417 | 362 | 0.8219 | 0.5636 | 0.8219 | 0.9066 | | No log | 7.5833 | 364 | 0.7890 | 0.6071 | 0.7890 | 0.8883 | | No log | 7.625 | 366 | 0.7070 | 0.5923 | 0.7070 | 0.8408 | | No log | 7.6667 | 368 | 0.6793 | 0.5678 | 0.6793 | 0.8242 | | No log | 7.7083 | 370 | 0.6742 | 0.5833 | 0.6742 | 0.8211 | | No log | 7.75 | 372 | 0.6843 | 0.5033 | 0.6843 | 0.8272 | | No log | 7.7917 | 374 | 0.6911 | 0.5018 | 0.6911 | 0.8313 | | No log | 7.8333 | 376 | 0.7138 | 0.5865 | 0.7138 | 0.8449 | | No log | 7.875 | 378 | 0.7463 | 0.5601 | 0.7463 | 0.8639 | | No log | 7.9167 | 380 | 0.7405 | 0.5601 | 0.7405 | 0.8605 | | No log | 7.9583 | 382 | 0.7043 | 0.6082 | 0.7043 | 0.8392 | | No log | 8.0 | 384 | 0.6661 | 0.5835 | 0.6661 | 0.8161 | | No log | 8.0417 | 386 | 0.6714 | 0.5629 | 0.6714 | 0.8194 | | No log | 8.0833 | 388 | 0.6763 | 0.5629 | 0.6763 | 0.8224 | | No log | 8.125 | 390 | 0.6741 | 0.5629 | 0.6741 | 0.8210 | | No log | 8.1667 | 392 | 0.6892 | 0.5304 | 0.6892 | 0.8302 | | No log | 8.2083 | 394 | 0.6887 | 0.5442 | 0.6887 | 0.8299 | | No log | 8.25 | 396 | 0.6785 | 0.5171 | 0.6785 | 0.8237 | | No log | 8.2917 | 398 | 0.6890 | 0.5549 | 0.6890 | 0.8300 | | No log | 8.3333 | 400 | 0.7006 | 0.5003 | 0.7006 | 0.8370 | | No log | 8.375 | 402 | 0.7508 | 0.5429 | 0.7508 | 0.8665 | | No log | 8.4167 | 404 | 0.7498 | 0.5429 | 0.7498 | 0.8659 | | No log | 8.4583 | 406 | 0.7334 | 0.5245 | 0.7334 | 0.8564 | | No log | 8.5 | 408 | 0.7353 | 0.5127 | 0.7353 | 0.8575 | | No log | 8.5417 | 410 | 0.7294 | 0.4660 | 0.7294 | 0.8541 | | No log | 8.5833 | 412 | 0.7272 | 0.4660 | 0.7272 | 0.8527 | | No log | 8.625 | 414 | 0.7272 
| 0.4660 | 0.7272 | 0.8527 | | No log | 8.6667 | 416 | 0.7308 | 0.4882 | 0.7308 | 0.8549 | | No log | 8.7083 | 418 | 0.7712 | 0.5728 | 0.7712 | 0.8782 | | No log | 8.75 | 420 | 0.7667 | 0.5666 | 0.7667 | 0.8756 | | No log | 8.7917 | 422 | 0.7521 | 0.5451 | 0.7521 | 0.8672 | | No log | 8.8333 | 424 | 0.7278 | 0.5002 | 0.7278 | 0.8531 | | No log | 8.875 | 426 | 0.7227 | 0.5247 | 0.7227 | 0.8501 | | No log | 8.9167 | 428 | 0.7265 | 0.5117 | 0.7265 | 0.8524 | | No log | 8.9583 | 430 | 0.7262 | 0.5131 | 0.7262 | 0.8522 | | No log | 9.0 | 432 | 0.7351 | 0.5033 | 0.7351 | 0.8574 | | No log | 9.0417 | 434 | 0.7263 | 0.4450 | 0.7263 | 0.8523 | | No log | 9.0833 | 436 | 0.7124 | 0.5050 | 0.7124 | 0.8440 | | No log | 9.125 | 438 | 0.7116 | 0.5050 | 0.7116 | 0.8436 | | No log | 9.1667 | 440 | 0.7240 | 0.5731 | 0.7240 | 0.8509 | | No log | 9.2083 | 442 | 0.7957 | 0.5650 | 0.7957 | 0.8920 | | No log | 9.25 | 444 | 0.8021 | 0.5447 | 0.8021 | 0.8956 | | No log | 9.2917 | 446 | 0.7347 | 0.5572 | 0.7347 | 0.8572 | | No log | 9.3333 | 448 | 0.6936 | 0.5622 | 0.6936 | 0.8328 | | No log | 9.375 | 450 | 0.6997 | 0.4118 | 0.6997 | 0.8365 | | No log | 9.4167 | 452 | 0.6979 | 0.4493 | 0.6979 | 0.8354 | | No log | 9.4583 | 454 | 0.6768 | 0.5288 | 0.6768 | 0.8227 | | No log | 9.5 | 456 | 0.6791 | 0.5259 | 0.6791 | 0.8241 | | No log | 9.5417 | 458 | 0.7070 | 0.6051 | 0.7070 | 0.8408 | | No log | 9.5833 | 460 | 0.7505 | 0.5938 | 0.7505 | 0.8663 | | No log | 9.625 | 462 | 0.8078 | 0.5628 | 0.8078 | 0.8988 | | No log | 9.6667 | 464 | 0.8080 | 0.5647 | 0.8080 | 0.8989 | | No log | 9.7083 | 466 | 0.7644 | 0.5487 | 0.7644 | 0.8743 | | No log | 9.75 | 468 | 0.7235 | 0.5823 | 0.7235 | 0.8506 | | No log | 9.7917 | 470 | 0.7142 | 0.5495 | 0.7142 | 0.8451 | | No log | 9.8333 | 472 | 0.7044 | 0.5627 | 0.7044 | 0.8393 | | No log | 9.875 | 474 | 0.6942 | 0.5862 | 0.6942 | 0.8332 | | No log | 9.9167 | 476 | 0.7021 | 0.4581 | 0.7021 | 0.8379 | | No log | 9.9583 | 478 | 0.7070 | 0.4949 | 0.7070 | 0.8408 | | 
No log | 10.0 | 480 | 0.7005 | 0.5169 | 0.7005 | 0.8369 | | No log | 10.0417 | 482 | 0.6892 | 0.4922 | 0.6892 | 0.8302 | | No log | 10.0833 | 484 | 0.7208 | 0.5380 | 0.7208 | 0.8490 | | No log | 10.125 | 486 | 0.7798 | 0.5170 | 0.7798 | 0.8830 | | No log | 10.1667 | 488 | 0.7857 | 0.5385 | 0.7857 | 0.8864 | | No log | 10.2083 | 490 | 0.7407 | 0.5482 | 0.7407 | 0.8606 | | No log | 10.25 | 492 | 0.6870 | 0.5025 | 0.6870 | 0.8288 | | No log | 10.2917 | 494 | 0.6793 | 0.5419 | 0.6793 | 0.8242 | | No log | 10.3333 | 496 | 0.6878 | 0.5405 | 0.6878 | 0.8293 | | No log | 10.375 | 498 | 0.7346 | 0.5380 | 0.7346 | 0.8571 | | 0.3005 | 10.4167 | 500 | 0.8043 | 0.5349 | 0.8043 | 0.8968 | | 0.3005 | 10.4583 | 502 | 0.8452 | 0.5167 | 0.8452 | 0.9194 | | 0.3005 | 10.5 | 504 | 0.8923 | 0.4681 | 0.8923 | 0.9446 | | 0.3005 | 10.5417 | 506 | 0.9333 | 0.3738 | 0.9333 | 0.9661 | | 0.3005 | 10.5833 | 508 | 0.9559 | 0.4539 | 0.9559 | 0.9777 | | 0.3005 | 10.625 | 510 | 0.8857 | 0.4809 | 0.8857 | 0.9411 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
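The Rmse column in the results table above is simply the square root of the Mse column; a quick check against the final evaluation result (Mse 0.8857, Rmse 0.9411):

```python
import math

# RMSE is the square root of MSE; verify against the reported
# evaluation metrics (Mse 0.8857, Rmse 0.9411).
mse = 0.8857
rmse = math.sqrt(mse)
print(round(rmse, 4))  # 0.9411
```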
QuantFactory/Mistral-7B-Instruct-v0.3-GGUF
QuantFactory
"2024-05-23T07:03:15Z"
198
3
transformers
[ "transformers", "gguf", "mistral", "text-generation", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2024-05-23T04:58:44Z"
--- license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.3 library_name: transformers pipeline_tag: text-generation tags: - mistral --- # Mistral-7B-Instruct-v0.3-GGUF - This is a quantized version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) created using llama.cpp # Model Description The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3. Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2): - Extended vocabulary to 32768 - Supports v3 Tokenizer - Supports function calling ### Chat After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using ``` mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256 ``` ### Instruct following ```py from pathlib import Path from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest mistral_models_path = Path.home() / "mistral_models" / "7B-Instruct-v0.3"  # directory the model was downloaded to tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3") model = Transformer.from_folder(mistral_models_path) completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")]) tokens = tokenizer.encode_chat_completion(completion_request).tokens out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) print(result) ``` ### Function calling ```py from mistral_common.protocol.instruct.tool_calls import Function, Tool from mistral_inference.model import Transformer from mistral_inference.generate
import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3") model = Transformer.from_folder(mistral_models_path) completion_request = ChatCompletionRequest( tools=[ Tool( function=Function( name="get_current_weather", description="Get the current weather", parameters={ "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA", }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location.", }, }, "required": ["location", "format"], }, ) ) ], messages=[ UserMessage(content="What's the weather like today in Paris?"), ], ) tokens = tokenizer.encode_chat_completion(completion_request).tokens out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) print(result) ``` ## Generate with `transformers` If you want to use Hugging Face `transformers` to generate text, you can do something like this. ```py from transformers import pipeline messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3") chatbot(messages) ``` ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. 
We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
Selssabil/ModelRecommender-1741109530
Selssabil
"2025-03-04T18:18:08Z"
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "license:mit", "region:us" ]
null
"2025-03-04T18:18:02Z"
--- library_name: peft license: mit base_model: microsoft/phi-2 tags: - generated_from_trainer model-index: - name: ModelRecommender-1741109530 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ModelRecommender-1741109530 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use paged_adamw_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7926 | 2.0 | 50 | 1.5457 | | 2.8971 | 4.0 | 100 | 1.5749 | ### Framework versions - PEFT 0.14.0 - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
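The hyperparameters above reach the total train batch of 4 by accumulating gradients over 4 micro-batches of size 1 (total_train_batch_size = train_batch_size × gradient_accumulation_steps). A minimal sketch of why that is equivalent to one larger batch, in plain Python — illustrative only; the actual run used the 🤗 Trainer:

```python
# Gradient accumulation over 4 micro-batches of size 1 reproduces the
# gradient of one batch of size 4 (with the loss averaged per batch).
def grad(w, xs, ys):
    """d/dw of mean squared error for a linear model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

w = 0.5
xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]

full = grad(w, xs, ys)                    # one batch of 4

accum_steps = 4
accum = 0.0
for x, y in zip(xs, ys):                  # 4 micro-batches of size 1
    accum += grad(w, [x], [y]) / accum_steps

print(abs(full - accum) < 1e-9)           # the two gradients match
```

The division by `accum_steps` inside the loop mirrors how trainers scale micro-batch losses so the accumulated gradient equals the full-batch one.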
cryotron/chatbot_academic_gemma_guff
cryotron
"2024-10-12T14:27:29Z"
12
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "base_model:quantized:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-10-12T14:26:47Z"
--- base_model: unsloth/gemma-2b-it-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf --- # Uploaded model - **Developed by:** cryotron - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ylacombe/musicgen-melody-large
ylacombe
"2024-02-06T12:37:41Z"
6
0
transformers
[ "transformers", "safetensors", "musicgen_melody", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
"2024-01-25T17:13:50Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Blitz-AI-ULTRA-GGUF
mradermacher
"2024-11-18T09:24:38Z"
0
1
transformers
[ "transformers", "merge", "mergekit", "lazymergekit", "en", "base_model:DenisTheDev/Blitz-AI-ULTRA", "base_model:finetune:DenisTheDev/Blitz-AI-ULTRA", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-03-27T03:14:09Z"
--- base_model: DenisTheDev/Blitz-AI-ULTRA language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher source_fix: convert --vocab-type bpe --pad-vocab tags: - merge - mergekit - lazymergekit --- ## About weighted/imatrix quants of https://huggingface.co/DenisTheDev/Blitz-AI-ULTRA <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Blitz-AI-ULTRA-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q2_K.gguf.part2of2) | Q2_K | 51.6 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ3_XS.gguf.part2of2) | IQ3_XS | 56.7 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ3_S.gguf.part2of2) | IQ3_S | 59.6 | beats Q3_K* | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q3_K_S.gguf.part2of2) | Q3_K_S | 59.6 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ3_M.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ3_M.gguf.part2of2) | IQ3_M | 62.7 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q3_K_M.gguf.part2of2) | Q3_K_M | 66.3 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q3_K_L.gguf.part2of2) | Q3_K_L | 72.1 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ4_XS.gguf.part2of2) | IQ4_XS | 73.1 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q4_0.gguf.part2of2) | Q4_0 | 76.5 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ4_NL.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.IQ4_NL.gguf.part2of2) | IQ4_NL | 76.9 | prefer IQ4_XS | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q4_K_S.gguf.part2of2) | Q4_K_S | 77.0 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q4_K_M.gguf.part2of2) | Q4_K_M | 81.4 | fast, recommended | | [PART 
1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q5_K_S.gguf.part2of2) | Q5_K_S | 92.3 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q5_K_M.gguf.part2of2) | Q5_K_M | 94.9 | | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q6_K.gguf.part3of3) | Q6_K | 109.2 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.Q8_0.gguf.part3of3) | Q8_0 | 139.9 | fast, best quality | | [P1](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part01of11) [P2](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part02of11) [P3](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part03of11) [P4](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part04of11) [P5](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part05of11) [P6](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part06of11) 
[P7](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part07of11) [P8](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part08of11) [P9](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part09of11) [P10](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part10of11) [P11](https://huggingface.co/mradermacher/Blitz-AI-ULTRA-GGUF/resolve/main/Blitz-AI-ULTRA.SOURCE.gguf.part11of11) | SOURCE | 512.6 | source gguf, only provided when it was hard to come by | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
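Since most of the quants above ship as multi-part files, they must be concatenated back into a single GGUF before loading. A sketch of the usual join — the two `printf` lines only simulate the downloaded parts so the snippet is self-contained; with real downloads you would skip them:

```shell
# Work in a scratch directory and simulate the two downloaded part files.
cd "$(mktemp -d)"
printf 'first-half'  > Blitz-AI-ULTRA.Q4_K_S.gguf.part1of2
printf 'second-half' > Blitz-AI-ULTRA.Q4_K_S.gguf.part2of2

# Join the parts in order (part1, then part2) into one GGUF file.
cat Blitz-AI-ULTRA.Q4_K_S.gguf.part1of2 \
    Blitz-AI-ULTRA.Q4_K_S.gguf.part2of2 \
    > Blitz-AI-ULTRA.Q4_K_S.gguf
```

Order matters: `cat` writes the parts in the sequence given, so always list `part1of2` before `part2of2`.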
acloudfan/opt-125m-gptq-8bit
acloudfan
"2024-10-25T13:44:31Z"
6
0
null
[ "safetensors", "opt", "arxiv:1910.09700", "8-bit", "gptq", "region:us" ]
null
"2024-10-08T12:58:01Z"
--- library_name: transformers tags: [] --- ## Part of a course titled "Generative AI application design & development" https://genai.acloudfan.com/ # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This model was created with the auto-gptq library. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
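The card's one concrete detail is that the model was produced with the auto-gptq library at 8 bits. As a rough illustration of what 8-bit weight quantization stores — integer codes plus a scale — here is a minimal symmetric round-trip sketch in plain Python; the real GPTQ algorithm goes further and chooses codes that minimize layer output error:

```python
# Minimal 8-bit symmetric quantize/dequantize round trip (illustration only;
# not the actual GPTQ procedure, which is error-compensating and per-group).
def quantize_8bit(weights):
    """Map floats to int8 codes in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

w = [0.02, -0.51, 0.33, 1.27, -1.27]
codes, scale = quantize_8bit(w)
w_hat = dequantize(codes, scale)

max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(max_err <= scale / 2)  # error is bounded by half a quantization step
```

Each stored weight costs one byte plus its share of the scale, which is where the memory savings over fp16/fp32 come from.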
Omar95farag/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-24_txt_vis_concat_enc_1_2_3_4_ramp
Omar95farag
"2024-01-15T09:54:04Z"
91
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-08-24T06:30:12Z"
--- license: cc-by-nc-sa-4.0 base_model: microsoft/layoutlmv3-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-24_txt_vis_concat_enc_1_2_3_4_ramp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-08-24_txt_vis_concat_enc_1_2_3_4_ramp This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5649 - Accuracy: 0.7425 - Exit 0 Accuracy: 0.05 - Exit 1 Accuracy: 0.415 - Exit 2 Accuracy: 0.505 - Exit 3 Accuracy: 0.6 - Exit 4 Accuracy: 0.6425 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:| | No log | 0.96 | 16 | 2.6888 | 0.15 | 0.08 | 0.0625 | 0.0625 | 0.0625 | 0.0625 | | No log | 1.98 | 33 | 2.5275 | 0.225 | 0.08 | 0.105 | 0.0625 | 0.0625 | 0.0625 | | No log | 3.0 | 50 | 2.3637 | 0.3025 | 0.0775 | 0.11 | 0.0625 | 0.0625 | 0.0625 | | No log | 3.96 | 66 | 2.1405 
| 0.38 | 0.075 | 0.1275 | 0.0625 | 0.0625 | 0.0625 | | No log | 4.98 | 83 | 1.8868 | 0.4975 | 0.075 | 0.11 | 0.0625 | 0.0625 | 0.0625 | | No log | 6.0 | 100 | 1.6298 | 0.59 | 0.075 | 0.12 | 0.0625 | 0.0625 | 0.0625 | | No log | 6.96 | 116 | 1.4167 | 0.64 | 0.0725 | 0.1225 | 0.0625 | 0.0625 | 0.0625 | | No log | 7.98 | 133 | 1.2772 | 0.67 | 0.075 | 0.1225 | 0.0625 | 0.0625 | 0.0625 | | No log | 9.0 | 150 | 1.1184 | 0.7325 | 0.075 | 0.125 | 0.0625 | 0.0625 | 0.0625 | | No log | 9.96 | 166 | 1.0215 | 0.7275 | 0.07 | 0.1225 | 0.0625 | 0.0625 | 0.07 | | No log | 10.98 | 183 | 0.9752 | 0.7525 | 0.07 | 0.12 | 0.0625 | 0.0625 | 0.07 | | No log | 12.0 | 200 | 0.9165 | 0.7425 | 0.07 | 0.13 | 0.0625 | 0.0625 | 0.0775 | | No log | 12.96 | 216 | 0.9352 | 0.7475 | 0.0725 | 0.1325 | 0.0625 | 0.0625 | 0.0775 | | No log | 13.98 | 233 | 0.9210 | 0.745 | 0.0725 | 0.13 | 0.0625 | 0.0625 | 0.0775 | | No log | 15.0 | 250 | 0.8671 | 0.775 | 0.075 | 0.1275 | 0.0625 | 0.0625 | 0.09 | | No log | 15.96 | 266 | 0.9380 | 0.7625 | 0.0725 | 0.1275 | 0.0625 | 0.0625 | 0.095 | | No log | 16.98 | 283 | 0.9594 | 0.77 | 0.0725 | 0.1225 | 0.0625 | 0.0625 | 0.1025 | | No log | 18.0 | 300 | 1.0292 | 0.745 | 0.0725 | 0.1275 | 0.0625 | 0.0625 | 0.1025 | | No log | 18.96 | 316 | 0.9903 | 0.755 | 0.07 | 0.13 | 0.08 | 0.0625 | 0.1075 | | No log | 19.98 | 333 | 1.0235 | 0.7725 | 0.065 | 0.1275 | 0.08 | 0.065 | 0.1175 | | No log | 21.0 | 350 | 1.0540 | 0.7675 | 0.0675 | 0.1175 | 0.09 | 0.0825 | 0.1275 | | No log | 21.96 | 366 | 1.1432 | 0.745 | 0.075 | 0.1375 | 0.0875 | 0.1175 | 0.1825 | | No log | 22.98 | 383 | 1.1439 | 0.75 | 0.0725 | 0.1575 | 0.0775 | 0.17 | 0.2475 | | No log | 24.0 | 400 | 1.2294 | 0.7325 | 0.07 | 0.21 | 0.12 | 0.1975 | 0.26 | | No log | 24.96 | 416 | 1.2759 | 0.73 | 0.07 | 0.1425 | 0.1325 | 0.1925 | 0.2725 | | No log | 25.98 | 433 | 1.1571 | 0.765 | 0.06 | 0.17 | 0.155 | 0.2475 | 0.3275 | | No log | 27.0 | 450 | 1.2853 | 0.7475 | 0.0575 | 0.205 | 0.185 | 0.275 | 0.3825 | | No log | 27.96 
| 466 | 1.3344 | 0.7325 | 0.0475 | 0.2525 | 0.2575 | 0.3275 | 0.3875 | | No log | 28.98 | 483 | 1.2372 | 0.7475 | 0.06 | 0.2075 | 0.2325 | 0.32 | 0.4425 | | 1.7096 | 30.0 | 500 | 1.2672 | 0.7625 | 0.0525 | 0.2775 | 0.34 | 0.3825 | 0.4775 | | 1.7096 | 30.96 | 516 | 1.3086 | 0.7525 | 0.0525 | 0.3225 | 0.375 | 0.4425 | 0.51 | | 1.7096 | 31.98 | 533 | 1.3129 | 0.7525 | 0.0525 | 0.29 | 0.3825 | 0.4175 | 0.525 | | 1.7096 | 33.0 | 550 | 1.3782 | 0.735 | 0.0475 | 0.305 | 0.4075 | 0.4625 | 0.525 | | 1.7096 | 33.96 | 566 | 1.3449 | 0.735 | 0.0475 | 0.33 | 0.425 | 0.48 | 0.5425 | | 1.7096 | 34.98 | 583 | 1.4527 | 0.7325 | 0.045 | 0.34 | 0.435 | 0.4925 | 0.5475 | | 1.7096 | 36.0 | 600 | 1.4438 | 0.7275 | 0.05 | 0.3525 | 0.43 | 0.52 | 0.5425 | | 1.7096 | 36.96 | 616 | 1.5117 | 0.7275 | 0.045 | 0.3775 | 0.445 | 0.53 | 0.56 | | 1.7096 | 37.98 | 633 | 1.4637 | 0.735 | 0.0475 | 0.3925 | 0.445 | 0.5425 | 0.5675 | | 1.7096 | 39.0 | 650 | 1.5315 | 0.73 | 0.045 | 0.3875 | 0.4575 | 0.55 | 0.6 | | 1.7096 | 39.96 | 666 | 1.4396 | 0.74 | 0.05 | 0.39 | 0.4625 | 0.555 | 0.5975 | | 1.7096 | 40.98 | 683 | 1.4850 | 0.7425 | 0.05 | 0.39 | 0.455 | 0.5475 | 0.6025 | | 1.7096 | 42.0 | 700 | 1.4815 | 0.7525 | 0.05 | 0.3975 | 0.4625 | 0.5675 | 0.6 | | 1.7096 | 42.96 | 716 | 1.4511 | 0.7475 | 0.05 | 0.3975 | 0.4725 | 0.56 | 0.6175 | | 1.7096 | 43.98 | 733 | 1.5443 | 0.7275 | 0.05 | 0.3975 | 0.47 | 0.56 | 0.625 | | 1.7096 | 45.0 | 750 | 1.5364 | 0.725 | 0.05 | 0.3975 | 0.4825 | 0.5675 | 0.625 | | 1.7096 | 45.96 | 766 | 1.5455 | 0.7325 | 0.05 | 0.4 | 0.49 | 0.5675 | 0.625 | | 1.7096 | 46.98 | 783 | 1.4992 | 0.745 | 0.05 | 0.4 | 0.4875 | 0.58 | 0.62 | | 1.7096 | 48.0 | 800 | 1.5089 | 0.7375 | 0.05 | 0.4025 | 0.485 | 0.5825 | 0.6325 | | 1.7096 | 48.96 | 816 | 1.5149 | 0.7375 | 0.05 | 0.4025 | 0.4925 | 0.5875 | 0.63 | | 1.7096 | 49.98 | 833 | 1.5285 | 0.735 | 0.05 | 0.4025 | 0.5025 | 0.59 | 0.635 | | 1.7096 | 51.0 | 850 | 1.5455 | 0.73 | 0.05 | 0.4 | 0.4975 | 0.595 | 0.64 | | 1.7096 | 51.96 | 866 | 1.5598 
| 0.7425 | 0.05 | 0.42 | 0.5 | 0.5975 | 0.64 | | 1.7096 | 52.98 | 883 | 1.5727 | 0.7325 | 0.05 | 0.4125 | 0.5 | 0.5925 | 0.64 | | 1.7096 | 54.0 | 900 | 1.5694 | 0.7425 | 0.05 | 0.415 | 0.495 | 0.5975 | 0.64 | | 1.7096 | 54.96 | 916 | 1.5760 | 0.735 | 0.05 | 0.415 | 0.5025 | 0.5975 | 0.64 | | 1.7096 | 55.98 | 933 | 1.5687 | 0.74 | 0.05 | 0.4125 | 0.5025 | 0.5975 | 0.6425 | | 1.7096 | 57.0 | 950 | 1.5648 | 0.74 | 0.05 | 0.415 | 0.5025 | 0.6 | 0.6425 | | 1.7096 | 57.6 | 960 | 1.5649 | 0.7425 | 0.05 | 0.415 | 0.505 | 0.6 | 0.6425 | ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.13.1 - Tokenizers 0.13.3
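The per-exit accuracies above come from classifiers attached at intermediate encoder layers (exits 0–4) in addition to the final head. A common way to use such exits at inference time is to stop at the first one whose prediction is confident enough; a minimal sketch of that policy in plain Python — the probabilities and threshold here are hypothetical, not taken from this model:

```python
# Early-exit policy sketch: return the first intermediate classifier whose
# top probability clears the threshold; otherwise fall through to the final head.
def early_exit(exit_probs, threshold=0.9):
    """exit_probs: per-exit class-probability lists, ordered shallow -> deep."""
    for depth, probs in enumerate(exit_probs):
        if max(probs) >= threshold:
            return depth, probs.index(max(probs))
    final = exit_probs[-1]
    return len(exit_probs) - 1, final.index(max(final))

# Hypothetical probabilities for one document over three exits:
probs = [
    [0.40, 0.35, 0.25],   # exit 0: unsure, keep going
    [0.92, 0.05, 0.03],   # exit 1: confident, stop here
    [0.97, 0.02, 0.01],   # final head (not reached for this input)
]
depth, label = early_exit(probs, threshold=0.9)
```

Lowering the threshold trades accuracy for compute: more inputs exit early, which is exactly the trade-off the per-exit accuracy columns quantify.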
mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF
mradermacher
"2024-08-02T09:56:52Z"
454
0
transformers
[ "transformers", "gguf", "en", "dataset:flammenai/FlameMix-DPO-v1", "dataset:flammenai/MahouMix-v1", "dataset:flammenai/Grill-Flammen-v1_chatML", "base_model:flammenai/Mahou-1.3-gemma2-9B", "base_model:quantized:flammenai/Mahou-1.3-gemma2-9B", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2024-07-03T07:14:05Z"
---
base_model: flammenai/Mahou-1.3-gemma2-9B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/MahouMix-v1
- flammenai/Grill-Flammen-v1_chatML
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/flammenai/Mahou-1.3-gemma2-9B

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Mahou-1.3-gemma2-9B-i1-GGUF/resolve/main/Mahou-1.3-gemma2-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
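The Usage section above mentions concatenating multi-part GGUF files. Quants split into `partXofY` pieces are plain byte splits, so rejoining them is just streaming the parts, in order, into one output file. A minimal sketch (the function name and file names are illustrative, not part of any library):

```python
import glob
import shutil

def concat_gguf_parts(pattern: str, out_path: str) -> list:
    """Rejoin split GGUF parts (e.g. model.gguf.part1of2, model.gguf.part2of2)
    into a single file by byte concatenation.

    Lexicographic sorting orders partXofY names correctly for the small
    part counts these quants use.
    """
    parts = sorted(glob.glob(pattern))
    if not parts:
        raise FileNotFoundError(f"no files match {pattern!r}")
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)  # stream bytes, no full read into memory
    return parts
```

This is equivalent to `cat model.gguf.part* > model.gguf` on a Unix command line, which is the usual recommendation for these splits.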
TOMFORD79/FreeTime_6
TOMFORD79
"2025-03-04T19:00:33Z"
0
0
null
[ "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-03-04T18:51:15Z"
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---

This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.

Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).