| Column | Type | Range / Values |
|---|---|---|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-15 06:27:42 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 521 values |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-15 06:27:26 |
| card | string | lengths 11 – 1.01M |
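A minimal sketch of loading and filtering a snapshot with this schema via the 🤗 `datasets` library; the repository id `user/models-metadata` is a placeholder, since this extract does not name the actual dataset:

```python
from datasets import load_dataset

# Hypothetical repository id -- the extract does not name the source dataset.
ds = load_dataset("user/models-metadata", split="train")

# Columns mirror the schema above: modelId, author, last_modified,
# downloads, likes, library_name, tags, pipeline_tag, createdAt, card.
popular = ds.filter(lambda row: row["downloads"] > 0 and row["library_name"] == "transformers")
for row in popular.select(range(min(5, len(popular)))):
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```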
**sepoul/charbel-first-experiment-tokenizer** · author: sepoul · last_modified: 2025-04-25T09:56:32Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:56:31Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
**sepoul/charbel-first-experiment-model** · author: sepoul · last_modified: 2025-04-25T09:56:30Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: text-classification · createdAt: 2025-04-25T09:48:22Z
card: the default auto-generated 🤗 transformers model card template, a verbatim duplicate of the card reproduced in full under **sepoul/charbel-first-experiment-tokenizer** above.
**Finsocial/gemma-3-finetune** · author: Finsocial · last_modified: 2025-04-25T09:54:42Z · downloads: 23 · likes: 0 · library_name: transformers · tags: [ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "gemma3", "conversational", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-04-25T09:52:50Z
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** Finsocial
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
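The card gives no usage snippet, so here is a hedged sketch of running the checkpoint with the standard transformers text-generation pipeline; the model id comes from the record above, while the prompt and `max_new_tokens` are illustrative choices, and a transformers version with Gemma 3 support is assumed:

```python
from transformers import pipeline

# Assumes a transformers release that supports the gemma3_text architecture;
# sampling parameters here are illustrative, not taken from the model card.
generator = pipeline("text-generation", model="Finsocial/gemma-3-finetune")
messages = [{"role": "user", "content": "Summarize what a model card is in one sentence."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```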
**MinaMila/llama_instbase_unlearned_LoRa_Adult_ep5_22** · author: MinaMila · last_modified: 2025-04-25T09:52:43Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:52:38Z
card: the default auto-generated 🤗 transformers model card template, a verbatim duplicate of the card reproduced in full under **sepoul/charbel-first-experiment-tokenizer** above.
**Kenazin/all-roberta-large-v1-peft-p-tuning-3-1** · author: Kenazin · last_modified: 2025-04-25T09:52:39Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:52:37Z
card: the default auto-generated 🤗 transformers model card template, a verbatim duplicate of the card reproduced in full under **sepoul/charbel-first-experiment-tokenizer** above.
**wsbagnsv1/SkyReels-V2-DF-1.3B-540P** · author: wsbagnsv1 · last_modified: 2025-04-25T09:52:01Z · downloads: 0 · likes: 0 · library_name: gguf · tags: [ "gguf", "video", "video-generation", "image-to-video", "base_model:Skywork/SkyReels-V2-DF-1.3B-540P", "base_model:quantized:Skywork/SkyReels-V2-DF-1.3B-540P", "license:apache-2.0", "region:us" ] · pipeline_tag: image-to-video · createdAt: 2025-04-25T09:42:31Z
---
license: apache-2.0
library_name: gguf
base_model:
- Skywork/SkyReels-V2-DF-1.3B-540P
tags:
- video
- video-generation
pipeline_tag: image-to-video
---

This is a direct GGUF conversion of [Skywork/SkyReels-V2-DF-1.3B-540P](https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P).

All quants are created from the FP32 base file. Only Q8_0 and smaller quants are uploaded; an F16 or BF16 version can be uploaded on request.

The model files can be used with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place model files in `ComfyUI/models/unet`; see the GitHub readme for further install instructions.

The VAE can be downloaded from [this repository by Kijai](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors).

Please refer to [this chart](https://github.com/ggerganov/llama.cpp/blob/master/examples/perplexity/README.md#llama-3-8b-scoreboard) for a basic overview of quantization types.

For conversion I used the conversion scripts from [city96](https://huggingface.co/city96).
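A minimal sketch of fetching one of the quantized files with `huggingface_hub` before dropping it into ComfyUI; the exact `.gguf` filename is an assumption, since the card does not list the file names in the repo:

```python
from huggingface_hub import hf_hub_download

# The filename below is hypothetical -- check the repository's file list
# for the actual quant names before downloading.
path = hf_hub_download(
    repo_id="wsbagnsv1/SkyReels-V2-DF-1.3B-540P",
    filename="SkyReels-V2-DF-1.3B-540P-Q8_0.gguf",
)
print(path)  # then place the file in ComfyUI/models/unet
```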
**jaymekoszut/sdcvsdc** · author: jaymekoszut · last_modified: 2025-04-25T09:47:26Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:bsd-2-clause", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:47:26Z
---
license: bsd-2-clause
---
**Szahriwar/Llama-3.2-3B-Instruct-bnb-4bit-q5-k-m** · author: Szahriwar · last_modified: 2025-04-25T09:47:11Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ] · pipeline_tag: null · createdAt: 2025-04-25T09:46:24Z
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Szahriwar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
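Since this repo ships a GGUF quant, a hedged sketch of running it with `llama-cpp-python` may help; the filename glob is a guess at the quant's file name, and the context size and prompt are illustrative:

```python
from llama_cpp import Llama

# Requires llama-cpp-python with huggingface_hub installed; the filename
# glob is an assumption about the q5_k_m quant's name in the repo.
llm = Llama.from_pretrained(
    repo_id="Szahriwar/Llama-3.2-3B-Instruct-bnb-4bit-q5-k-m",
    filename="*q5_k_m.gguf",
    n_ctx=2048,  # illustrative context window
)
print(llm("Q: What is a GGUF file? A:", max_tokens=48)["choices"][0]["text"])
```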
**Kam1qwe/Kam1lka** · author: Kam1qwe · last_modified: 2025-04-25T09:46:51Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "license:artistic-2.0", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:46:51Z
---
license: artistic-2.0
---
**Kanda-Gangu-Chettri-7-2-Nepali-Video-link/VIRAL.Gangu.Chettri.Kanda.7.2.minute.Video.oficial.link** · author: Kanda-Gangu-Chettri-7-2-Nepali-Video-link · last_modified: 2025-04-25T09:44:06Z · downloads: 0 · likes: 0 · library_name: null · tags: [ "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:43:23Z
card: an HTML `<animated-image>` promo snippet wrapping a GIF in an external "viral video" redirect link; no model documentation.
**lucasaltmann/7891035325618** · author: lucasaltmann · last_modified: 2025-04-25T09:43:04Z · downloads: 1 · likes: 0 · library_name: gondolize · tags: [ "gondolize", "v8", "modelos", "model-index", "region:us" ] · pipeline_tag: null · createdAt: 2024-08-22T02:21:02Z
---
tags:
- modelos
library_name: gondolize
library_version: 1.0.1
model-index:
- name: lucasaltmann/7891035325618
  results:
  - task:
      type: object-detection
    metrics:
    - type: precision
      value: 0.9950
      name: mAP@0.5(box)
---

## Performance Metrics

| Metric | Value |
| ------- | ----- |
| mAP50 | 0.9950 |
| mAP50-95 | 0.7834 |
| Precision | 0.9924 |
| Recall | 1.0000 |
| Fitness | 0.8045 |
| Total images | 11 |
| Total objects | 23 |
**peterklein2308/bert-finetuned-ner** · author: peterklein2308 · last_modified: 2025-04-25T09:42:44Z · downloads: 13 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ] · pipeline_tag: token-classification · createdAt: 2025-04-18T20:09:18Z
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: validation
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9343041535661095
    - name: Recall
      type: recall
      value: 0.9501851228542578
    - name: F1
      type: f1
      value: 0.9421777221526908
    - name: Accuracy
      type: accuracy
      value: 0.9864749514334491
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0598
- Precision: 0.9343
- Recall: 0.9502
- F1: 0.9422
- Accuracy: 0.9865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0749 | 1.0 | 1756 | 0.0637 | 0.9165 | 0.9367 | 0.9265 | 0.9825 |
| 0.035 | 2.0 | 3512 | 0.0644 | 0.9321 | 0.9473 | 0.9397 | 0.9855 |
| 0.0218 | 3.0 | 5268 | 0.0598 | 0.9343 | 0.9502 | 0.9422 | 0.9865 |

### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.1
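The card omits a usage snippet, so here is a minimal sketch of running this NER checkpoint with the standard transformers token-classification pipeline; the `aggregation_strategy` is an illustrative choice, not something the card specifies:

```python
from transformers import pipeline

# "simple" aggregation merges word-piece tokens back into whole entities;
# this is an illustrative default, not prescribed by the model card.
ner = pipeline(
    "token-classification",
    model="peterklein2308/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```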
**darkc0de/Llama-3.1-Nemotron-Nano-8B-v1-abliterated-Uncensored-Toxic-DPO-GGUF** · author: darkc0de · last_modified: 2025-04-25T09:42:32Z · downloads: 0 · likes: 1 · library_name: transformers · tags: [ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "dataset:Undi95/toxic-dpo-v0.1-NoWarning", "base_model:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated", "base_model:quantized:huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ] · pipeline_tag: null · createdAt: 2025-04-25T07:10:39Z
---
base_model:
- nvidia/Llama-3.1-Nemotron-Nano-8B-v1
- huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- Undi95/toxic-dpo-v0.1-NoWarning
---

**huihui-ai/Llama-3.1-Nemotron-Nano-8B-v1-abliterated** trained with **Unsloth ORPO** for 1 **full** epoch on **Undi95/toxic-dpo-v0.1-NoWarning**.

After testing, this model is still very censored. Don't waste your time here; better alternatives are available.
**TruongSinhAI/CAD_Qwen25_0.5B_Coder_85steps_2** · author: TruongSinhAI · last_modified: 2025-04-25T09:41:56Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:41:52Z
card: the same auto-generated 🤗 transformers model card template reproduced under **sepoul/charbel-first-experiment-tokenizer** above, differing only in its frontmatter (`tags: - unsloth`).
**cwaud/aa1c29e4-292a-4fcb-a82e-8c20dc74b39d** · author: cwaud · last_modified: 2025-04-25T09:37:51Z · downloads: 0 · likes: 0 · library_name: peft · tags: [ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "license:llama3", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:05:10Z
---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: aa1c29e4-292a-4fcb-a82e-8c20dc74b39d
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 7acc08131dd9b62c_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/7acc08131dd9b62c_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: chosen
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: cwaud/aa1c29e4-292a-4fcb-a82e-8c20dc74b39d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/7acc08131dd9b62c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: cb2a32ee-af60-47cd-b15b-c11f8a7e8f21
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cb2a32ee-af60-47cd-b15b-c11f8a7e8f21
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# aa1c29e4-292a-4fcb-a82e-8c20dc74b39d

This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3494

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5107 | 0.0001 | 1 | 0.6221 |
| 0.6497 | 0.0002 | 3 | 0.6147 |
| 0.626 | 0.0005 | 6 | 0.4790 |
| 0.3619 | 0.0007 | 9 | 0.3494 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
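A hedged sketch of attaching this LoRA adapter to its base model with `peft`; the dtype choice is an illustrative assumption, not taken from the card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0"
adapter_id = "cwaud/aa1c29e4-292a-4fcb-a82e-8c20dc74b39d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# bfloat16 is an illustrative dtype; the axolotl config used bf16: auto.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
# Attach the LoRA weights produced by the axolotl run described above.
model = PeftModel.from_pretrained(base, adapter_id)
```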
**NotTheStallion/Qwen2.5-0.20B-layer-reduced** · author: NotTheStallion · last_modified: 2025-04-25T09:35:15Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-04-25T09:34:53Z
card: the default auto-generated 🤗 transformers model card template, a verbatim duplicate of the card reproduced in full under **sepoul/charbel-first-experiment-tokenizer** above.
**dgambettaphd/M_llm3_gen8_run0_X_doc1000_synt64_tot128_FRESH** · author: dgambettaphd · last_modified: 2025-04-25T09:34:59Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ] · pipeline_tag: null · createdAt: 2025-04-25T09:34:41Z
card: the same auto-generated 🤗 transformers model card template reproduced under **sepoul/charbel-first-experiment-tokenizer** above, differing only in its frontmatter (`tags: - unsloth`).
**daishen/openfin-0.5B-ZH-optimal-sft_lxl3129_audit_regulation** · author: daishen · last_modified: 2025-04-25T09:31:58Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-04-25T09:10:13Z
card: the same auto-generated 🤗 transformers model card template reproduced under **sepoul/charbel-first-experiment-tokenizer** above, differing only in its frontmatter (`tags: - llama-factory`).
**NotTheStallion/Qwen2.5-0.24B-layer-reduced** · author: NotTheStallion · last_modified: 2025-04-25T09:30:11Z · downloads: 0 · likes: 0 · library_name: transformers · tags: [ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ] · pipeline_tag: text-generation · createdAt: 2025-04-25T09:29:41Z
card: the default auto-generated 🤗 transformers model card template, a verbatim duplicate of the card reproduced in full under **sepoul/charbel-first-experiment-tokenizer** above.
Achuka/Deeplab-segmentation
Achuka
2025-04-25T09:29:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-25T09:29:55Z
--- license: apache-2.0 ---
uiovasot/piano_llama_v5
uiovasot
2025-04-25T09:26:15Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-25T09:09:35Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** uiovasot - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
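The card above documents only the upload and gives no loading code. A minimal hedged sketch, assuming the repo's safetensors weights load with the standard transformers causal-LM API (the tags show it also ships GGUF files for llama.cpp-style runtimes); the prompt is hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uiovasot/piano_llama_v5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt; the card does not document the expected input format.
prompt = "Write a short chord progression for a calm piano piece."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```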
Szahriwar/Llama-3.2-3B-Instruct-bnb-4bit-elife-lora
Szahriwar
2025-04-25T09:25:53Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T09:25:31Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Szahriwar - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
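The "-lora" suffix and the missing pipeline tag suggest this repo holds LoRA adapter weights rather than a merged model; that is an assumption, not stated in the card. A hedged loading sketch with PEFT on top of the base model the card names:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-3b-instruct-bnb-4bit"  # base model named in the card
adapter_id = "Szahriwar/Llama-3.2-3B-Instruct-bnb-4bit-elife-lora"  # assumed to be a LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter weights
```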
Duakovui/viT5_skype_bot_v1.5
Duakovui
2025-04-25T09:25:35Z
48
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-25T09:24:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bhavinjawade/gemma-12b-tq-model
bhavinjawade
2025-04-25T09:22:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:33:34Z
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: gemma-12b-tq-model tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-12b-tq-model This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="bhavinjawade/gemma-12b-tq-model", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.50.0.dev0 - Pytorch: 2.7.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
efficientscaling/Z1-Shortest-7B
efficientscaling
2025-04-25T09:21:21Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T09:19:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
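The card's "How to Get Started" section is empty. Given the row's qwen2 / text-generation / conversational tags, a minimal sketch in the quick-start style other cards in this dump use (the prompt is hypothetical, and chat-style input assumes the checkpoint ships a chat template):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="efficientscaling/Z1-Shortest-7B", device_map="auto")
messages = [{"role": "user", "content": "Solve 12 * 17 step by step."}]  # hypothetical prompt
print(generator(messages, max_new_tokens=256, return_full_text=False)[0]["generated_text"])
```

The sibling repo efficientscaling/Z1-Longest-7B below carries identical tags, so the same pattern would apply there.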
kkks05/Llama-3.2-3B_lora_spider
kkks05
2025-04-25T09:19:43Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T09:19:29Z
--- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** kkks05 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
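Since the card credits Unsloth and the repo name suggests a LoRA fine-tune (possibly on the Spider text-to-SQL dataset, though the card does not say), a hedged sketch using Unsloth's own loader; the sequence length and 4-bit loading are assumptions:

```python
from unsloth import FastLanguageModel

# Assumes the repo holds Unsloth-format (LoRA) weights, as the tags suggest.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="kkks05/Llama-3.2-3B_lora_spider",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized kernels
```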
Odogwu001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross
Odogwu001
2025-04-25T09:19:11Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am humming barky albatross", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:17:42Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am humming barky albatross - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Odogwu001/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_barky_albatross", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Amyww/54555
Amyww
2025-04-25T09:18:34Z
0
0
null
[ "license:artistic-2.0", "region:us" ]
null
2025-04-25T09:18:34Z
--- license: artistic-2.0 ---
Amyww/5455
Amyww
2025-04-25T09:18:00Z
0
0
null
[ "license:bigcode-openrail-m", "region:us" ]
null
2025-04-25T09:18:00Z
--- license: bigcode-openrail-m ---
Shekharmeena/shona_TTS_finetuned
Shekharmeena
2025-04-25T09:16:43Z
0
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2025-04-25T09:16:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
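The card's quick-start is empty, but the row's tags (vits, text-to-audio) indicate a VITS speech checkpoint. A hedged sketch assuming it loads with transformers' VITS classes; the sample sentence is a hypothetical Shona input:

```python
import torch
from transformers import AutoTokenizer, VitsModel

model_id = "Shekharmeena/shona_TTS_finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = VitsModel.from_pretrained(model_id)

inputs = tokenizer("Mhoro, makadini?", return_tensors="pt")  # hypothetical input text
with torch.no_grad():
    waveform = model(**inputs).waveform  # audio samples at model.config.sampling_rate
```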
DavieLion/output_iter0_ckpt_temperature
DavieLion
2025-04-25T09:16:37Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "alignment-handbook", "generated_from_trainer", "conversational", "dataset:new_data_temperature/iter0", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T09:03:00Z
--- library_name: transformers base_model: meta-llama/Llama-3.2-1B tags: - alignment-handbook - generated_from_trainer datasets: - new_data_temperature/iter0 model-index: - name: iter0-ckpt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # iter0-ckpt This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the new_data_temperature/iter0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6.0 ### Training results ### Framework versions - Transformers 4.45.0 - Pytorch 2.1.2+cu121 - Datasets 3.2.0 - Tokenizers 0.20.3
jjeccles/SJHotpotfilter0425R4-chatonly
jjeccles
2025-04-25T09:16:35Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T09:16:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
easygoing0114/AI_upscalers
easygoing0114
2025-04-25T09:13:33Z
0
0
null
[ "onnx", "art", "region:us" ]
null
2025-04-24T13:59:10Z
--- tags: - art --- # AI Upscalers This repository collects various AI upscaling models for image enhancement. Each model inherits its original license, which must be respected. Please review the license details before use, especially for commercial purposes. ## Models | Model | Type | License | Commercial Use | Features | Recommended | | --- | --- | --- | --- | --- | --- | | RealESRGAN_x4plus | ESRGAN | BSD 3-Clause | ✅ | Balanced | ✅ | | RealESRGAN_x4plus_anime_6B | ESRGAN | BSD 3-Clause | ✅ | Anime Style | ✅ | | 4x-AnimeSharp | ESRGAN | CC-BY-NC-SA-4.0 | ❌ | Sharp | | | 4x-UltraSharp_150000 | ESRGAN | CC-BY-NC-SA-4.0 | ❌ | Sharp | | | 4x_foolhardy_Remacri_210000 | ESRGAN | CC-BY-NC-SA-4.0 | ❌ | Sharp | | | 4x_fatal_Anime_500000_G | ESRGAN | CC-BY-NC-SA-4.0 | ❌ | | | | 4x_IllustrationJaNai_V1_ESRGAN_135k | ESRGAN | CC-BY-NC-SA-4.0 | ❌ | Anime Style | ✅ | | 4x_NMKD-Superscale-SP_178000_G | ESRGAN | WTFPL | ✅ | Balanced | | | 4x-NMKD-YandereNeo_320k | ESRGAN | WTFPL | ✅ | Balanced | | | 4x_NMKD-YandereNeoXL_200k | ESRGAN | WTFPL | ✅ | Balanced | ✅ | | 4x_escale_100000_G | ESRGAN | WTFPL | ✅ | | | | 4x_RealisticRescaler_100000_G | ESRGAN | WTFPL | ✅ | Natural | ✅ | | 4x PSNR_Pretrained | ESRGAN | Apache-2.0 | ✅ | | | | 4x_UniversalUpscalerV2-Neutral_115000_G | ESRGAN | WTFPL | ✅ | | | | 4x_UniversalUpscalerV2-Sharper_103000_G | ESRGAN | WTFPL | ✅ | | | | 4x_UniversalUpscalerV2-Sharp_101000_G | ESRGAN | WTFPL | ✅ | | | | 4x-PBRify_RPLKSRd_V3_160000 | PLKSR | CC0-1.0 | ✅ | | | | OmniSR_X4_DIV2K | OmniSR | Apache-2.0 | ✅ | | | | 4x-SwinIR-L_GAN | SwinIR | Apache-2.0 | ✅ | | | | 4x-SwinIR-L_PNSR | SwinIR | Apache-2.0 | ✅ | | | | 4xNomos2_hq_drct-l_200000 | DRCT | CC-BY-4.0 | ✅ | | | | 4x_IllustrationJaNai_V1_DAT2_190k | DAT | CC-BY-NC-SA-4.0 | ❌ | Anime Style | | | 4xNomos2_hq_dat2_140000 | DAT | CC-BY-4.0 | ✅ | Natural | | | 4xNomos8kDAT_110000 | DAT | CC-BY-4.0 | ✅ | Natural | | | 4xNomos8kHAT-L_otf_220000 | HAT | CC-BY-4.0 | ✅ | Natural | | ## OpenModelDB Links - [RealESRGAN_x4plus](https://openmodeldb.info/models/4x-realesrgan-x4plus) - [RealESRGAN_x4Plus Anime 6B](https://openmodeldb.info/models/4x-realesrgan-x4plus-anime-6b) - [4x_AnimeSharp](https://openmodeldb.info/models/4x-AnimeSharp) - [4x-UltraSharp_150000](https://openmodeldb.info/models/4x-UltraSharp) - [4x_foolhardy_Remacri_210000](https://openmodeldb.info/models/4x-Remacri) - [4x_fatal_Anime_500000_G](https://openmodeldb.info/models/4x-Fatal-Anime) - [IllustrationJaNai_V1_ESRGAN_135k](https://openmodeldb.info/models/4x-IllustrationJaNai-V1-ESRGAN) - [4x_NMKD-Superscale-SP_178000_G](https://openmodeldb.info/models/4x-NMKD-Superscale) - [4x-NMKD-YandereNeo_320k](https://openmodeldb.info/models/4x-NMKD-YandereNeo) - [4x_NMKD-YandereNeoXL_200k](https://openmodeldb.info/models/4x-NMKD-YandereNeo-XL) - [4x_escale_100000_G](https://openmodeldb.info/models/4x-escale) - [4x_RealisticRescaler_100000_G](https://openmodeldb.info/models/4x-RealisticRescaler) - [4x PSNR Pretrained](https://openmodeldb.info/models/4x-PSNR) - [4x_UniversalUpscalerV2-Neutral_115000_G](https://openmodeldb.info/models/4x-UniversalUpscalerV2-Neutral) - [4x_UniversalUpscalerV2-Sharper_103000_G](https://openmodeldb.info/models/4x-UniversalUpscalerV2-Sharper) - [4x_UniversalUpscalerV2-Sharp_101000_G](https://openmodeldb.info/models/4x-UniversalUpscalerV2-Sharp) - [4x-PBRify_RPLKSRd_V3_160000](https://openmodeldb.info/models/4x-PBRify-RPLKSRd-V3) - [OmniSR_X4_DIV2K](https://openmodeldb.info/models/4x-OmniSR-DIV2K) - 
[4x-SwinIR-L_GAN](https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0) - [4x-SwinIR-L_PNSR](https://github.com/JingyunLiang/SwinIR/releases/tag/v0.0) - [4xNomos2_hq_drct-l_200000](https://openmodeldb.info/models/4x-Nomos2-hq-drct-l) - [IllustrationJaNai_V1_DAT2_190k](https://openmodeldb.info/models/4x-IllustrationJaNai-V1-DAT2) - [4xNomos2_hq_dat2_140000](https://openmodeldb.info/models/4x-Nomos2-hq-dat2) - [4xNomos8kDAT_110000](https://openmodeldb.info/models/4x-Nomos8kDAT) - [4xNomos8kHAT-L_otf_220000](https://openmodeldb.info/models/4x-Nomos8kHAT-L-otf) ## Comparison for Anime Illustrations (External Site) - [Comparison image](https://www.ai-image-journey.com/p/upscale-model.html) - [Guide](https://www.ai-image-journey.com/2025/04/ai-upscale-hires-fix.html) ## Licenses The following licenses apply to the models in this repository, listed from most restrictive to least restrictive: | License | Description | Restrictions | Original License Text | | --- | --- | --- | --- | | [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) | Non-commercial use only, must share under the same license. | Non-commercial, same license sharing | [CC-BY-NC-SA-4.0 Legal Code](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) | | [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) | Requires copyright notice and disclaimer. | Copyright notice, disclaimer | [BSD 3-Clause License](https://opensource.org/licenses/BSD-3-Clause) | | [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) | Requires copyright notice and change log. | Copyright notice, change log | [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt) | | [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) | Requires attribution. | Attribution | [CC-BY-4.0 Legal Code](https://creativecommons.org/licenses/by/4.0/legalcode) | | [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/) | Public domain, no restrictions. | None | [CC0-1.0 Legal Code](https://creativecommons.org/publicdomain/zero/1.0/legalcode) | | [WTFPL](http://www.wtfpl.net/) | Do whatever you want. | None | [WTFPL License](http://www.wtfpl.net/txt/copying/) |
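The card lists the weights but not how to run them. As one hedged example for the ESRGAN-family entries (RealESRGAN_x4plus uses the RRDBNet architecture; the local .pth filename is an assumption), inference with the realesrgan package could look like the sketch below. DAT, HAT, SwinIR, and PLKSR checkpoints need their own loaders or a generic loader such as spandrel:

```python
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RealESRGAN_x4plus is an RRDBNet with 23 blocks at scale 4.
net = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path="RealESRGAN_x4plus.pth", model=net)

img = cv2.imread("input.png", cv2.IMREAD_COLOR)   # hypothetical input image
output, _ = upsampler.enhance(img, outscale=4)    # returns (image, img_mode)
cv2.imwrite("output_x4.png", output)
```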
AI-Enthusiast11/mistral-7b-4bit-pii-entity-extractor
AI-Enthusiast11
2025-04-25T09:11:59Z
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-24T21:52:46Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mistral - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** AI-Enthusiast11 - **License:** apache-2.0 - **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
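Because the repo ships GGUF weights, a hedged sketch with llama-cpp-python; the filename glob and the prompt are assumptions, so substitute the actual .gguf file in the repo:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="AI-Enthusiast11/mistral-7b-4bit-pii-entity-extractor",
    filename="*.gguf",  # hypothetical pattern; pick the real file name
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Extract PII entities: John Doe lives at 12 Oak St."}]
)
print(resp["choices"][0]["message"]["content"])
```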
efficientscaling/Z1-Longest-7B
efficientscaling
2025-04-25T09:11:57Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T09:10:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
salunaalavi/bert-based-summarize
salunaalavi
2025-04-25T09:11:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-24T14:18:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lasion/gemma-3
Lasion
2025-04-25T09:10:39Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3_text", "trl", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T09:10:31Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Lasion - **License:** apache-2.0 - **Finetuned from model:** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
sonquan/55
sonquan
2025-04-25T09:09:11Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-04-25T09:09:10Z
--- license: creativeml-openrail-m ---
importcjj/financial_classification
importcjj
2025-04-25T09:08:43Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "dataset:dataset", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-25T09:03:41Z
--- library_name: transformers tags: - generated_from_trainer datasets: - dataset model-index: - name: model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model This model was trained from scratch on the `dataset` dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
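The card omits usage code, but the row's pipeline tag is text-classification, so a minimal sketch is straightforward; the input sentence is hypothetical, and the card does not document the label set:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="importcjj/financial_classification")
print(clf("Quarterly revenue grew 12% year over year."))  # e.g. [{'label': ..., 'score': ...}]
```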
mustaphounii04/smol-captioner
mustaphounii04
2025-04-25T09:05:35Z
5
1
peft
[ "peft", "tensorboard", "safetensors", "arxiv:1910.09700", "base_model:HuggingFaceTB/SmolVLM-Base", "base_model:adapter:HuggingFaceTB/SmolVLM-Base", "region:us" ]
null
2025-04-11T18:42:20Z
--- base_model: HuggingFaceTB/SmolVLM-Base library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> THIS MODEL IS EXCLUSIVELY FINETUNED TO CAPTION FOOD IMAGES. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
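The card states the adapter captions food images but leaves the quick-start empty. A hedged sketch, assuming the PEFT adapter attaches to the named SmolVLM base and that the processor accepts an inline <image> placeholder (official SmolVLM examples typically build the prompt with apply_chat_template instead):

```python
from PIL import Image
from peft import PeftModel
from transformers import AutoModelForVision2Seq, AutoProcessor

base_id = "HuggingFaceTB/SmolVLM-Base"        # base model named in the card
adapter_id = "mustaphounii04/smol-captioner"

processor = AutoProcessor.from_pretrained(base_id)
model = AutoModelForVision2Seq.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT adapter

image = Image.open("dish.jpg")  # hypothetical food photo
inputs = processor(text="<image>Caption this dish.", images=[image], return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```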
Neobozrim/llama-3-1-8b-emotionally-framed-deployable
Neobozrim
2025-04-25T09:05:26Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T09:01:21Z
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Neobozrim
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
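The card ships no inference snippet, so a minimal loading sketch with Unsloth is below. The 4-bit flag and sequence length are assumptions chosen to mirror the bnb-4bit base checkpoint, not settings documented by the author.

```python
from unsloth import FastLanguageModel

# Assumption: load in 4-bit to mirror the quantized base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Neobozrim/llama-3-1-8b-emotionally-framed-deployable",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path

inputs = tokenizer("Describe how you are feeling today.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```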
mlfoundations-dev/b2_science_length_gpt4omini_10k
mlfoundations-dev
2025-04-25T09:05:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T00:38:01Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: b2_science_length_gpt4omini_10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # b2_science_length_gpt4omini_10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_length_gpt4omini_10k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
marzieh-maleki/defeasible-snli-t5-small-strengthener-tuned
marzieh-maleki
2025-04-25T09:04:31Z
0
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "trl", "sft", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-25T09:04:16Z
---
base_model: google-t5/t5-small
library_name: transformers
model_name: defeasible-snli-t5-small-strengthener-tuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for defeasible-snli-t5-small-strengthener-tuned

This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

# T5 is an encoder-decoder model, so use the text2text-generation pipeline with a plain
# string prompt rather than a chat-message list. The premise/hypothesis formatting below
# is an assumption; match the format used during fine-tuning.
generator = pipeline(
    "text2text-generation",
    model="marzieh-maleki/defeasible-snli-t5-small-strengthener-tuned",
    device="cuda",
)
output = generator("premise: A man is outside. hypothesis: He is gardening.", max_new_tokens=128)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/marzieh-maleki-ghent-university/def_nli_baselines_sep/runs/eqqsqqc3)

This model was trained with SFT.

### Framework versions

- TRL: 0.14.0
- Transformers: 4.48.2
- Pytorch: 2.6.0
- Datasets: 2.21.0
- Tokenizers: 0.21.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
LocalDoc/private_ner_azerbaijani_v2
LocalDoc
2025-04-25T09:00:10Z
0
0
null
[ "safetensors", "xlm-roberta", "personally identifiable information", "pii", "ner", "azerbaijan", "token-classification", "az", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:cc-by-4.0", "region:us" ]
token-classification
2025-04-25T04:35:30Z
---
license: cc-by-4.0
language:
- az
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: token-classification
tags:
- personally identifiable information
- pii
- ner
- azerbaijan
---

# PII NER Azerbaijani v2

**PII NER Azerbaijani** is the second version of a fine-tuned Named Entity Recognition (NER) model (first version: <a target="_blank" href="https://huggingface.co/LocalDoc/private_ner_azerbaijani">PII NER Azerbaijani</a>) based on XLM-RoBERTa. It is trained on Azerbaijani PII data to identify personally identifiable information such as names, dates of birth, cities, addresses, and phone numbers in text.

## Model Details

- **Base Model:** XLM-RoBERTa
- **Training Metrics:**

| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 |
|-------|---------------|-----------------|-----------|----------|----------|
| 1 | 0.029100 | 0.025319 | 0.963367 | 0.962449 | 0.962907 |
| 2 | 0.019900 | 0.023291 | 0.964567 | 0.968474 | 0.966517 |
| 3 | 0.015400 | 0.018993 | 0.969536 | 0.967555 | 0.968544 |
| 4 | 0.012700 | 0.017730 | 0.971919 | 0.969768 | 0.970842 |
| 5 | 0.011100 | 0.018095 | 0.973056 | 0.970075 | 0.971563 |

- **Test Metrics:**
  - **Precision:** 0.9760
  - **Recall:** 0.9732
  - **F1 Score:** 0.9746

## Detailed Test Classification Report

| Entity | Precision | Recall | F1-score | Support |
|---------------------|-----------|--------|----------|---------|
| AGE | 0.98 | 0.98 | 0.98 | 509 |
| BUILDINGNUM | 0.97 | 0.75 | 0.85 | 1285 |
| CITY | 1.00 | 1.00 | 1.00 | 2100 |
| CREDITCARDNUMBER | 0.99 | 0.98 | 0.99 | 249 |
| DATE | 0.85 | 0.92 | 0.88 | 1576 |
| DRIVERLICENSENUM | 0.98 | 0.98 | 0.98 | 258 |
| EMAIL | 0.98 | 1.00 | 0.99 | 1485 |
| GIVENNAME | 0.99 | 1.00 | 0.99 | 9926 |
| IDCARDNUM | 0.99 | 0.99 | 0.99 | 1174 |
| PASSPORTNUM | 0.99 | 0.99 | 0.99 | 426 |
| STREET | 0.94 | 0.98 | 0.96 | 1480 |
| SURNAME | 1.00 | 1.00 | 1.00 | 3357 |
| TAXNUM | 0.99 | 1.00 | 0.99 | 240 |
| TELEPHONENUM | 0.97 | 0.95 | 0.96 | 2175 |
| TIME | 0.96 | 0.96 | 0.96 | 2216 |
| ZIPCODE | 0.97 | 0.97 | 0.97 | 520 |

### Averages

| Metric | Precision | Recall | F1-score | Support |
|------------------|-----------|--------|----------|---------|
| **Micro avg** | 0.98 | 0.97 | 0.97 | 28976 |
| **Macro avg** | 0.97 | 0.96 | 0.97 | 28976 |
| **Weighted avg** | 0.98 | 0.97 | 0.97 | 28976 |

## A list of entities the model is able to recognize

```python
[
    "AGE", "BUILDINGNUM", "CITY", "CREDITCARDNUMBER", "DATE", "DRIVERLICENSENUM",
    "EMAIL", "GIVENNAME", "IDCARDNUM", "PASSPORTNUM", "STREET", "SURNAME",
    "TAXNUM", "TELEPHONENUM", "TIME", "ZIPCODE"
]
```

## Usage

To use the model for named entity recognition: the model is trained on lowercase text, and the code below normalizes input automatically. If you write your own preprocessing, keep this in mind.
```python
import torch
from transformers import AutoModelForTokenClassification, XLMRobertaTokenizerFast
import numpy as np
from typing import List, Dict, Tuple


class AzerbaijaniNER:
    def __init__(self, model_name_or_path="LocalDoc/private_ner_azerbaijani_v2"):
        self.model = AutoModelForTokenClassification.from_pretrained(model_name_or_path)
        self.tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-base")
        self.model.eval()
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model.to(self.device)
        # Mapping from class ids to BIO tags.
        self.id_to_label = {
            0: "O",
            1: "B-AGE", 2: "B-BUILDINGNUM", 3: "B-CITY", 4: "B-CREDITCARDNUMBER",
            5: "B-DATE", 6: "B-DRIVERLICENSENUM", 7: "B-EMAIL", 8: "B-GIVENNAME",
            9: "B-IDCARDNUM", 10: "B-PASSPORTNUM", 11: "B-STREET", 12: "B-SURNAME",
            13: "B-TAXNUM", 14: "B-TELEPHONENUM", 15: "B-TIME", 16: "B-ZIPCODE",
            17: "I-AGE", 18: "I-BUILDINGNUM", 19: "I-CITY", 20: "I-CREDITCARDNUMBER",
            21: "I-DATE", 22: "I-DRIVERLICENSENUM", 23: "I-EMAIL", 24: "I-GIVENNAME",
            25: "I-IDCARDNUM", 26: "I-PASSPORTNUM", 27: "I-STREET", 28: "I-SURNAME",
            29: "I-TAXNUM", 30: "I-TELEPHONENUM", 31: "I-TIME", 32: "I-ZIPCODE",
        }
        # Human-readable names for each entity type.
        self.entity_types = {
            "AGE": "Age", "BUILDINGNUM": "Building Number", "CITY": "City",
            "CREDITCARDNUMBER": "Credit Card Number", "DATE": "Date",
            "DRIVERLICENSENUM": "Driver License Number", "EMAIL": "Email",
            "GIVENNAME": "Given Name", "IDCARDNUM": "ID Card Number",
            "PASSPORTNUM": "Passport Number", "STREET": "Street",
            "SURNAME": "Surname", "TAXNUM": "Tax ID Number",
            "TELEPHONENUM": "Phone Number", "TIME": "Time", "ZIPCODE": "Zip Code",
        }

    def predict(self, text: str, max_length: int = 512) -> List[Dict]:
        # The model was trained on lowercase text, so normalize the input first.
        text = text.lower()
        inputs = self.tokenizer(
            text,
            return_tensors="pt",
            max_length=max_length,
            padding="max_length",
            truncation=True,
            return_offsets_mapping=True,
        )
        offset_mapping = inputs.pop("offset_mapping").numpy()[0]
        inputs = {k: v.to(self.device) for k, v in inputs.items()}
        with torch.no_grad():
            outputs = self.model(**inputs)
        predictions = outputs.logits.argmax(dim=2)
        predictions = predictions[0].cpu().numpy()
        entities = []
        current_entity = None
        # Walk the tokens and stitch B-/I- tags into character-level entity spans.
        for idx, (offset, pred_id) in enumerate(zip(offset_mapping, predictions)):
            if offset[0] == 0 and offset[1] == 0:
                continue  # skip special and padding tokens
            pred_label = self.id_to_label[pred_id]
            if pred_label.startswith("B-"):
                if current_entity:
                    entities.append(current_entity)
                entity_type = pred_label[2:]
                current_entity = {
                    "label": entity_type,
                    "name": self.entity_types.get(entity_type, entity_type),
                    "start": int(offset[0]),
                    "end": int(offset[1]),
                    "value": text[offset[0]:offset[1]],
                }
            elif pred_label.startswith("I-") and current_entity is not None:
                entity_type = pred_label[2:]
                if entity_type == current_entity["label"]:
                    current_entity["end"] = int(offset[1])
                    current_entity["value"] = text[current_entity["start"]:current_entity["end"]]
                else:
                    entities.append(current_entity)
                    current_entity = None
            elif pred_label == "O" and current_entity is not None:
                entities.append(current_entity)
                current_entity = None
        if current_entity:
            entities.append(current_entity)
        return entities

    def anonymize_text(self, text: str, replacement_char: str = "X") -> Tuple[str, List[Dict]]:
        # Replace every detected entity span with replacement_char, preserving length.
        entities = self.predict(text)
        if not entities:
            return text, []
        entities.sort(key=lambda x: x["start"], reverse=True)
        anonymized_text = text
        for entity in entities:
            start = entity["start"]
            end = entity["end"]
            length = end - start
            anonymized_text = anonymized_text[:start] + replacement_char * length + anonymized_text[end:]
        entities.sort(key=lambda x: x["start"])
        return anonymized_text, entities

    def highlight_entities(self, text: str) -> str:
        # Wrap every detected entity in a [Type: value] marker.
        entities = self.predict(text)
        if not entities:
            return text
        entities.sort(key=lambda x: x["start"], reverse=True)
        highlighted_text = text
        for entity in entities:
            start = entity["start"]
            end = entity["end"]
            entity_value = entity["value"]
            entity_type = entity["name"]
            highlighted_text = (
                highlighted_text[:start]
                + f"[{entity_type}: {entity_value}]"
                + highlighted_text[end:]
            )
        return highlighted_text


if __name__ == "__main__":
    ner = AzerbaijaniNER()
    test_text = """Salam, mənim adım Əli Hüseynovdu. Doğum tarixim 15.05.1990-dır. Bakı şəhərində, 28 may küçəsi 4 ünvanında yaşayıram. Telefon nömrəm +994552345678-dir. Mən 4169741358254152 nömrəli kartdan ödəniş etmişəm. Sifarişim nə vaxt çatdırılcaq ?"""
    print("=== Original Text ===")
    print(test_text)
    print("\n=== Found Entities ===")
    entities = ner.predict(test_text)
    for entity in entities:
        print(f"{entity['name']}: {entity['value']} (positions {entity['start']}-{entity['end']})")
    print("\n=== Text with Highlighted Entities ===")
    highlighted_text = ner.highlight_entities(test_text)
    print(highlighted_text)
    print("\n=== Anonymized Text ===")
    anonymized_text, _ = ner.anonymize_text(test_text)
    print(anonymized_text)
```

```
=== Original Text ===
Salam, mənim adım Əli Hüseynovdu. Doğum tarixim 15.05.1990-dır. Bakı şəhərində, 28 may küçəsi 4 ünvanında yaşayıram. Telefon nömrəm +994552345678-dir. Mən 4169741358254152 nömrəli kartdan ödəniş etmişəm. Sifarişim nə vaxt çatdırılcaq ?

=== Found Entities ===
Given Name: əli (positions 18-21)
Surname: hüseynov (positions 22-30)
Date: 15.05.1990 (positions 48-58)
City: bakı (positions 64-68)
Street: 28 may küçəsi (positions 80-93)
Building Number: 4 (positions 94-95)
Phone Number: +994552345678 (positions 132-145)
Credit Card Number: 4169741358254152 (positions 155-171)

=== Text with Highlighted Entities ===
Salam, mənim adım [Given Name: əli] [Surname: hüseynov]du. Doğum tarixim [Date: 15.05.1990]-dır. [City: bakı] şəhərində, [Street: 28 may küçəsi] [Building Number: 4] ünvanında yaşayıram. Telefon nömrəm [Phone Number: +994552345678]-dir. Mən [Credit Card Number: 4169741358254152] nömrəli kartdan ödəniş etmişəm. Sifarişim nə vaxt çatdırılcaq ?

=== Anonymized Text ===
Salam, mənim adım XXX XXXXXXXXdu. Doğum tarixim XXXXXXXXXX-dır. XXXX şəhərində, XXXXXXXXXXXXX X ünvanında yaşayıram. Telefon nömrəm XXXXXXXXXXXXX-dir. Mən XXXXXXXXXXXXXXXX nömrəli kartdan ödəniş etmişəm. Sifarişim nə vaxt çatdırılcaq ?
```

## CC BY 4.0 License — What It Allows

The **Creative Commons Attribution 4.0 International (CC BY 4.0)** license allows:

### ✅ You Can:
- **Use** the model for any purpose, including commercial use.
- **Share** it — copy and redistribute in any medium or format.
- **Adapt** it — remix, transform, and build upon it for any purpose, even commercially.

### 📝 You Must:
- **Give appropriate credit** — Attribute the original creator (e.g., name, link to the license, and indicate if changes were made).
- **Not imply endorsement** — Do not suggest the original author endorses you or your use.

### ❌ You Cannot:
- Apply legal terms or technological measures that legally restrict others from doing anything the license permits (no DRM or additional restrictions).

### Summary:
You are free to use, modify, and distribute the model — even for commercial purposes — as long as you give proper credit to the original creator.

For more information, please refer to the <a target="_blank" href="https://creativecommons.org/licenses/by/4.0/deed.en">CC BY 4.0 license</a>.
## Contact For more information, questions, or issues, please contact LocalDoc at [[email protected]].
Culturedniichan/mergekit-ties-yynxkwc
Culturedniichan
2025-04-25T08:58:58Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4", "base_model:merge:ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4", "base_model:ReadyArt/Forgotten-Safeword-24B-V2.2", "base_model:merge:ReadyArt/Forgotten-Safeword-24B-V2.2", "base_model:TroyDoesAI/BlackSheep-24B", "base_model:merge:TroyDoesAI/BlackSheep-24B", "base_model:arcee-ai/Arcee-Blitz", "base_model:merge:arcee-ai/Arcee-Blitz", "base_model:unsloth/Mistral-Small-24B-Instruct-2501", "base_model:merge:unsloth/Mistral-Small-24B-Instruct-2501", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:45:41Z
--- base_model: - unsloth/Mistral-Small-24B-Instruct-2501 - ReadyArt/Forgotten-Safeword-24B-V2.2 - ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4 - arcee-ai/Arcee-Blitz - TroyDoesAI/BlackSheep-24B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [unsloth/Mistral-Small-24B-Instruct-2501](https://huggingface.co/unsloth/Mistral-Small-24B-Instruct-2501) as a base. ### Models Merged The following models were included in the merge: * [ReadyArt/Forgotten-Safeword-24B-V2.2](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-V2.2) * [ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4) * [arcee-ai/Arcee-Blitz](https://huggingface.co/arcee-ai/Arcee-Blitz) * [TroyDoesAI/BlackSheep-24B](https://huggingface.co/TroyDoesAI/BlackSheep-24B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: unsloth/Mistral-Small-24B-Instruct-2501 - model: TroyDoesAI/BlackSheep-24B parameters: density: 0.50 weight: 0.60 - model: ReadyArt/Forgotten-Safeword-24B-V2.2 parameters: density: 0.35 weight: 0.15 - model: arcee-ai/Arcee-Blitz parameters: density: 0.15 # minimal edits survive weight: 0.05 # very low influence - model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4 parameters: density: 0.30 weight: 0.10 merge_method: ties base_model: unsloth/Mistral-Small-24B-Instruct-2501 parameters: normalize: true dtype: bfloat16 ```
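The card ends at the merge recipe, so a short loading sketch may help. The dtype mirrors the `bfloat16` merge dtype above, while `device_map="auto"` is an assumption for multi-GPU or offloaded setups.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The merged model loads like any other Mistral-architecture checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Culturedniichan/mergekit-ties-yynxkwc")
model = AutoModelForCausalLM.from_pretrained(
    "Culturedniichan/mergekit-ties-yynxkwc",
    torch_dtype=torch.bfloat16,  # matches the merge's dtype setting
    device_map="auto",
)
```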
vmpsergio/9463d9df-44d0-4b5b-a588-fc9f202b7e1d
vmpsergio
2025-04-25T08:57:26Z
0
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2025-04-25T08:52:27Z
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: 9463d9df-44d0-4b5b-a588-fc9f202b7e1d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: facebook/opt-350m bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 32cb49683e226f4d_train_data.json ds_type: json format: custom path: /workspace/input_data/32cb49683e226f4d_train_data.json type: field_input: author field_instruction: dynasty field_output: content format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vmpsergio/9463d9df-44d0-4b5b-a588-fc9f202b7e1d hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/32cb49683e226f4d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a4199bed-2854-4046-9e07-45f55e8274f5 wandb_project: s56-2 wandb_run: your_name wandb_runid: a4199bed-2854-4046-9e07-45f55e8274f5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 9463d9df-44d0-4b5b-a588-fc9f202b7e1d This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 3.3061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.3525 | 0.0078 | 200 | 3.3061 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
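This repository contains only LoRA adapter weights, so inference needs the adapter attached to the base model. A minimal sketch with `peft` follows; the sample prompt reflects the `'{instruction} {input}'` format in the config above (dynasty followed by author) and is otherwise an assumption.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "vmpsergio/9463d9df-44d0-4b5b-a588-fc9f202b7e1d")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# Prompt format per the axolotl config: '{dynasty} {author}'.
inputs = tokenizer("唐 李白", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```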
vermoney/98095d60-2493-4df6-b46c-06fd733298b9
vermoney
2025-04-25T08:56:22Z
0
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-25T08:53:03Z
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: 98095d60-2493-4df6-b46c-06fd733298b9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-350m bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 32cb49683e226f4d_train_data.json ds_type: json format: custom path: /workspace/input_data/32cb49683e226f4d_train_data.json type: field_input: author field_instruction: dynasty field_output: content format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vermoney/98095d60-2493-4df6-b46c-06fd733298b9 hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/32cb49683e226f4d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a4199bed-2854-4046-9e07-45f55e8274f5 wandb_project: s56-9 wandb_run: your_name wandb_runid: a4199bed-2854-4046-9e07-45f55e8274f5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 98095d60-2493-4df6-b46c-06fd733298b9 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.4678 | 0.0078 | 200 | 3.3708 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
isaiahbjork/poker-reasoning-14b
isaiahbjork
2025-04-25T08:55:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:49:16Z
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** isaiahbjork
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
marialvsantiago/3b2ae3db-8ecd-4bde-bda6-3ddf98922d8e
marialvsantiago
2025-04-25T08:55:38Z
0
0
peft
[ "peft", "safetensors", "opt", "axolotl", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-25T08:52:16Z
--- library_name: peft license: other base_model: facebook/opt-350m tags: - axolotl - generated_from_trainer model-index: - name: 3b2ae3db-8ecd-4bde-bda6-3ddf98922d8e results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: facebook/opt-350m bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 32cb49683e226f4d_train_data.json ds_type: json format: custom path: /workspace/input_data/32cb49683e226f4d_train_data.json type: field_input: author field_instruction: dynasty field_output: content format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: marialvsantiago/3b2ae3db-8ecd-4bde-bda6-3ddf98922d8e hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/32cb49683e226f4d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: a4199bed-2854-4046-9e07-45f55e8274f5 wandb_project: s56-33 wandb_run: your_name wandb_runid: a4199bed-2854-4046-9e07-45f55e8274f5 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3b2ae3db-8ecd-4bde-bda6-3ddf98922d8e This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.4704 | 0.0078 | 200 | 3.3730 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
sayed0am/cogito-v1-preview-qwen-32B-AWQ
sayed0am
2025-04-25T08:54:36Z
0
0
null
[ "safetensors", "qwen2", "base_model:deepcogito/cogito-v1-preview-qwen-32B", "base_model:quantized:deepcogito/cogito-v1-preview-qwen-32B", "license:apache-2.0", "4-bit", "awq", "region:us" ]
null
2025-04-25T08:42:15Z
---
license: apache-2.0
base_model:
- deepcogito/cogito-v1-preview-qwen-32B
tags:
- qwen2
---

AWQ-quantized version of [deepcogito/cogito-v1-preview-qwen-32B](https://huggingface.co/deepcogito/cogito-v1-preview-qwen-32B).
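A brief usage sketch, assuming a vLLM-based stack (vLLM ships native AWQ support; the explicit `quantization="awq"` flag just makes the choice deliberate):

```python
from vllm import LLM, SamplingParams

# 4-bit AWQ weights cut memory use roughly 4x versus fp16 at some throughput cost.
llm = LLM(model="sayed0am/cogito-v1-preview-qwen-32B-AWQ", quantization="awq")
outputs = llm.generate(
    ["Explain AWQ quantization in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```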
WwtortugaswW/imdb
WwtortugaswW
2025-04-25T08:54:08Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-24T20:36:40Z
--- library_name: transformers license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # imdb This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2536 - Accuracy: 0.9352 - F1: 0.9353 - Precision: 0.9338 - Recall: 0.9369 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1.5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.2879 | 1.0 | 3125 | 0.2721 | 0.9265 | 0.9255 | 0.9378 | 0.9135 | | 0.2124 | 1.5002 | 4688 | 0.2536 | 0.9352 | 0.9353 | 0.9338 | 0.9369 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
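The card has no usage snippet, so a minimal inference sketch is below. Note that the label names are an assumption: the card does not document whether `LABEL_0`/`LABEL_1` map to negative or positive.

```python
from transformers import pipeline

# Binary sentiment classifier (RoBERTa fine-tuned on IMDB-style reviews).
classifier = pipeline("text-classification", model="WwtortugaswW/imdb")
print(classifier("A beautifully shot film whose script never finds its footing."))
# Expected output shape: [{'label': ..., 'score': ...}]; verify the label-to-sentiment mapping.
```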
RyanL22/sapiens-bfloat16
RyanL22
2025-04-25T08:53:38Z
0
1
null
[ "base_model:facebook/sapiens", "base_model:finetune:facebook/sapiens", "license:mit", "region:us" ]
null
2025-04-25T08:37:22Z
---
license: mit
base_model:
- facebook/sapiens
---

# Sapiens Exported Model (Schema 7.3)

This repository provides a re-exported checkpoint of the [facebook/sapiens](https://huggingface.co/facebook/sapiens) model, exported with **PyTorch 2.5.1** to ensure compatibility with **modern `torch.export.load()` workflows**.

---

## Background

The original SAPIENS checkpoints were exported with PyTorch 2.1.x and use **IR schema version `5.1`**, which causes `torch.export.load()` to fail on newer PyTorch versions (e.g., 2.2+) due to a mismatch in how versioning is handled internally. Many users encounter the following error:

`ValueError: invalid literal for int() with base 10: b'5.1'`

To address this, we provide a **re-exported checkpoint** created with **PyTorch 2.5.1**, which uses **schema version `7.3`** and is fully compatible with current and future versions of PyTorch.

---

## Contents

- `..._bfloat16.pt2`: Re-exported IR checkpoint
- Compatible with: `torch.export.load()` in **PyTorch ≥ 2.3.0**
- Schema version: **7.3**

---

## How to Load

```python
from torch.export import load
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    "RyanL22/sapiens-bfloat16",
    "pose/checkpoints/sapiens_1b_goliath_best_goliath_AP_639_bfloat16.pt2",
)
model = load(model_path).module()
```

🔧 Make sure you are using PyTorch 2.3.0 or higher to ensure schema 7.x compatibility.

## Credits

- Original model: [facebook/sapiens](https://huggingface.co/facebook/sapiens)
- Re-exported by: [@RyanL22](https://huggingface.co/RyanL22)
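For completeness, a hedged inference sketch for the `model` object loaded above. The input resolution, dtype, and device are assumptions (an exported program bakes in the shapes and device it was traced with), so check the original sapiens documentation for the exact preprocessing.

```python
import torch

# Assumption: the graph was exported for a bfloat16 NCHW input at 1024x768 on CUDA.
# A mismatched shape, dtype, or device will raise an error at call time.
dummy = torch.randn(1, 3, 1024, 768, dtype=torch.bfloat16, device="cuda")
with torch.inference_mode():
    out = model(dummy)
print(type(out), getattr(out, "shape", None))
```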
yangjianhua/radar-1.5B-model
yangjianhua
2025-04-25T08:52:33Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:41:39Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hfendpoints-images/whisper-vllm-gpu
hfendpoints-images
2025-04-25T08:51:40Z
0
1
null
[ "inference_endpoints", "audio", "transcription", "automatic-speech-recognition", "base_model:openai/whisper-large-v3", "base_model:finetune:openai/whisper-large-v3", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-15T12:46:19Z
---
license: apache-2.0
pipeline_tag: automatic-speech-recognition
base_model:
- openai/whisper-large-v3
tags:
- inference_endpoints
- audio
- transcription
---

# Inference Endpoint - Multilingual Audio Transcription with Whisper models

**Deploy OpenAI's Whisper Inference Endpoint to transcribe audio files to text in many languages**

The resulting deployment exposes an [OpenAI Platform Transcription](https://platform.openai.com/docs/api-reference/audio/createTranscription)-compatible HTTP endpoint, which you can query with the `OpenAI` client libraries or directly through `cURL`.

## Available Routes

| path | description |
|:-----------------------------|:--------------------------------------------------|
| /api/v1/audio/transcriptions | Transcription endpoint to interact with the model |
| /docs | Visual documentation |

## Getting started

- **Getting text output from audio file**

```bash
curl http://localhost:8000/api/v1/audio/transcriptions \
  --request POST \
  -F file=@</path/to/audio/file> \
  -F "response_format=text"
```

- **Getting JSON output from audio file**

```bash
curl http://localhost:8000/api/v1/audio/transcriptions \
  --request POST \
  -F file=@</path/to/audio/file> \
  -F "response_format=json"
```

- **Getting segmented JSON output from audio file**

```bash
curl http://localhost:8000/api/v1/audio/transcriptions \
  --request POST \
  -F file=@</path/to/audio/file> \
  -F "response_format=verbose_json"
```

## Specifications

| spec | value | description |
|:------------------ |:---------------------:|:-----------------------------------------------------------------------------------------------------------|
| Engine | vLLM (v0.8.3) | Underlying inference engine leverages [vLLM](https://docs.vllm.ai/en/latest/) |
| Hardware | GPU (Ada Lovelace) | Requires the target endpoint to run over NVIDIA GPUs with at least compute capabilities 8.9 (Ada Lovelace) |
| Compute data type | `bfloat16` | Computations (matmuls, norms, etc.) are done using `bfloat16` precision |
| KV cache data type | `float8` (e4m3) | Key-Value cache is stored on the GPU using `float8` (`float8_e4m3`) precision to save space |
| PyTorch Compile | ✅ | Enable the use of `torch.compile` to further optimize model's execution with more optimizations |
| CUDA Graphs | ✅ | Enable the use of so called "[CUDA Graphs](https://developer.nvidia.com/blog/cuda-graphs/)" to reduce overhead executing GPU computations |
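Because the endpoint mirrors the OpenAI transcription API, the official Python client also works. A sketch assuming the local URL from the cURL examples; the `api_key` placeholder assumes the endpoint does not enforce auth, and the model name may be ignored server-side.

```python
from openai import OpenAI

# Point the client at the deployed endpoint; the /api/v1 prefix matches the routes table above.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="unused")

with open("sample.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="openai/whisper-large-v3",  # assumption: the server may not require this field
        file=audio,
        response_format="text",
    )
print(transcript)
```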
ishan24/test_modelopt_quant
ishan24
2025-04-25T08:48:51Z
0
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "modelopt", "region:us" ]
null
2025-04-25T08:46:11Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
prathameshkalamkar/gemma-2b-sql-finetuned
prathameshkalamkar
2025-04-25T08:45:27Z
0
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:43:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deswaq/juh81
deswaq
2025-04-25T08:43:44Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:40:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dthrhdar11/gemma-law-prediction-finetune
dthrhdar11
2025-04-25T08:41:56Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-pt", "base_model:finetune:google/gemma-3-4b-pt", "endpoints_compatible", "region:us" ]
null
2025-04-24T07:06:23Z
--- base_model: google/gemma-3-4b-pt library_name: transformers model_name: gemma-law-prediction-finetune tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-law-prediction-finetune This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dthrhdar11/gemma-law-prediction-finetune", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
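Behind a TRL card like this, training usually comes down to a short `SFTTrainer` setup. A minimal sketch under stated assumptions — the dataset and hyperparameters below are placeholders, not details taken from this card:

```python
# Minimal TRL SFT sketch (illustrative; the dataset and hyperparameters
# are placeholders, not taken from the card above).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="google/gemma-3-4b-pt",  # base model named in the card
    args=SFTConfig(output_dir="gemma-law-prediction-finetune"),
    train_dataset=dataset,
)
trainer.train()
```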
karimievzal/dfvbdfvb
karimievzal
2025-04-25T08:40:26Z
0
0
null
[ "license:bsd-3-clause", "region:us" ]
null
2025-04-25T08:40:26Z
--- license: bsd-3-clause ---
Eric19910601/distilbert-rotten-tomatoes
Eric19910601
2025-04-25T08:40:19Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-04-25T08:34:11Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-rotten-tomatoes results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-rotten-tomatoes This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
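The hyperparameters in the card above map almost one-to-one onto a 🤗 `TrainingArguments` setup. A minimal reproduction sketch follows; the `rotten_tomatoes` dataset is an assumption inferred from the model name (the card itself lists the dataset as unknown):

```python
# Sketch of the training setup implied by the card's hyperparameters.
# The rotten_tomatoes dataset is an assumption based on the model name;
# the card itself lists the dataset as unknown.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("rotten_tomatoes")
tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="distilbert-rotten-tomatoes",
    learning_rate=2e-5,                 # values below are from the card
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    optim="adamw_torch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    processing_class=tokenizer,  # default collator pads batches dynamically
)
trainer.train()
```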
ykarout/phi4-deepseek-lora_model-2504
ykarout
2025-04-25T08:38:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:38:07Z
--- base_model: unsloth/phi-4-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ykarout - **License:** apache-2.0 - **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
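For local inference, the checkpoint can be loaded back through Unsloth's `FastLanguageModel` API; a minimal sketch, assuming default settings (`max_seq_length` and 4-bit loading are choices, not values from the card):

```python
# Loading sketch via Unsloth's FastLanguageModel API; max_seq_length and
# load_in_4bit are assumed defaults, not values stated in the card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ykarout/phi4-deepseek-lora_model-2504",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer(
    "Summarize LoRA fine-tuning in one sentence.", return_tensors="pt"
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```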
TheaLilott/results
TheaLilott
2025-04-25T08:38:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:38:22Z
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: results tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for results This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="TheaLilott/results", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_15-17_21-23_27-29
annasoli
2025-04-25T08:37:28Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:37:23Z
--- base_model: unsloth/Qwen2.5-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** annasoli - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_12-29
annasoli
2025-04-25T08:33:34Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:33:29Z
--- base_model: unsloth/Qwen2.5-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** annasoli - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ton-An/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit
ton-An
2025-04-25T08:31:52Z
0
0
mlx
[ "mlx", "safetensors", "deepseek_v2", "custom_code", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Base", "base_model:quantized:deepseek-ai/DeepSeek-Coder-V2-Lite-Base", "license:other", "4-bit", "region:us" ]
null
2025-04-25T08:31:16Z
--- license: other license_name: deepseek-license license_link: LICENSE base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Base tags: - mlx --- # ton-An/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit The Model [ton-An/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit](https://huggingface.co/ton-An/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit) was converted to MLX format from [deepseek-ai/DeepSeek-Coder-V2-Lite-Base](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) using mlx-lm version **0.22.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("ton-An/DeepSeek-Coder-V2-Lite-Base-mlx-4Bit") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
kenzi123/fume_demo_test
kenzi123
2025-04-25T08:30:56Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:30:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
robinfaro/StandardMoE-1B-fineweb_edu-20BT
robinfaro
2025-04-25T08:26:02Z
0
0
null
[ "safetensors", "moegpt", "model_hub_mixin", "pytorch_model_hub_mixin", "custom_code", "region:us" ]
null
2025-04-25T08:23:39Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
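Cards like this describe checkpoints pushed with `PyTorchModelHubMixin`, which serializes any `nn.Module` together with its constructor arguments. A minimal sketch of the pattern — the class and config below are hypothetical, not this repo's actual architecture:

```python
# Minimal PyTorchModelHubMixin pattern (illustrative; the class and config
# below are hypothetical, not the actual architecture of this repo).
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.linear(x)

model = TinyModel(hidden_size=64)
model.save_pretrained("tiny-model")       # writes config.json + safetensors
# model.push_to_hub("user/tiny-model")    # uploads both to the Hub
reloaded = TinyModel.from_pretrained("tiny-model")
```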
badrmarani/cifar100_r20_ce_test
badrmarani
2025-04-25T08:25:13Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-04-25T08:25:04Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
annasoli/Qwen2.5-14B-Instruct_bad_med_dpR1_12-29_2
annasoli
2025-04-25T08:24:32Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-14B-Instruct", "base_model:finetune:unsloth/Qwen2.5-14B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:24:29Z
--- base_model: unsloth/Qwen2.5-14B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** annasoli - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-14B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TuanNM171284/miai-sample-embedding-tuan
TuanNM171284
2025-04-25T08:23:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-25T08:23:06Z
--- license: apache-2.0 ---
abdelmoneim22/Mistral_Fine_Tuned_v1
abdelmoneim22
2025-04-25T08:23:00Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:22:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Neobozrim/llama-3-1-8b-emotionally-framed-merged
Neobozrim
2025-04-25T08:22:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-04-25T07:35:09Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
robinfaro/StandardMoE-1B-fineweb_edu-10BT
robinfaro
2025-04-25T08:22:00Z
0
0
null
[ "safetensors", "moegpt", "model_hub_mixin", "pytorch_model_hub_mixin", "custom_code", "region:us" ]
null
2025-04-25T08:19:36Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
mergekit-community/MN-Hekate-Noctiluca-12B
mergekit-community
2025-04-25T08:21:32Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:LatitudeGames/Wayfarer-12B", "base_model:merge:LatitudeGames/Wayfarer-12B", "base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b", "base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b", "base_model:mergekit-community/MN-Hekate-Episkopos-17B", "base_model:merge:mergekit-community/MN-Hekate-Episkopos-17B", "base_model:mergekit-community/MN-Hekate-Limenoskopos-17B", "base_model:merge:mergekit-community/MN-Hekate-Limenoskopos-17B", "base_model:mergekit-community/MN-Hekate-Pyrtania-12B", "base_model:merge:mergekit-community/MN-Hekate-Pyrtania-12B", "base_model:nbeerbower/mistral-nemo-bophades-12B", "base_model:merge:nbeerbower/mistral-nemo-bophades-12B", "base_model:nbeerbower/mistral-nemo-gutenberg-12B-v4", "base_model:merge:nbeerbower/mistral-nemo-gutenberg-12B-v4", "base_model:yamatazen/BlueLight-12B", "base_model:merge:yamatazen/BlueLight-12B", "base_model:yamatazen/LoyalMaid-12B", "base_model:merge:yamatazen/LoyalMaid-12B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:12:30Z
--- base_model: - yamatazen/BlueLight-12B - mergekit-community/MN-Hekate-Pyrtania-12B - LatitudeGames/Wayfarer-12B - mergekit-community/MN-Hekate-Limenoskopos-17B - mergekit-community/MN-Hekate-Episkopos-17B - nbeerbower/mistral-nemo-gutenberg-12B-v4 - nbeerbower/mistral-nemo-bophades-12B - yamatazen/LoyalMaid-12B - PocketDoc/Dans-SakuraKaze-V1.0.0-12b library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mergekit-community/MN-Hekate-Pyrtania-12B](https://huggingface.co/mergekit-community/MN-Hekate-Pyrtania-12B) as a base. ### Models Merged The following models were included in the merge: * [yamatazen/BlueLight-12B](https://huggingface.co/yamatazen/BlueLight-12B) * [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B) * [mergekit-community/MN-Hekate-Limenoskopos-17B](https://huggingface.co/mergekit-community/MN-Hekate-Limenoskopos-17B) * [mergekit-community/MN-Hekate-Episkopos-17B](https://huggingface.co/mergekit-community/MN-Hekate-Episkopos-17B) * [nbeerbower/mistral-nemo-gutenberg-12B-v4](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v4) * [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B) * [yamatazen/LoyalMaid-12B](https://huggingface.co/yamatazen/LoyalMaid-12B) * [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b) ### Configuration The following YAML configuration was used to produce this model: ```yaml out_dtype: bfloat16 merge_method: model_stock base_model: mergekit-community/MN-Hekate-Pyrtania-12B slices: - sources: - model: mergekit-community/MN-Hekate-Pyrtania-12B layer_range: [0, 12] parameters: weight: 3 - model: yamatazen/BlueLight-12B layer_range: [0, 12] - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b layer_range: [0, 12] - sources: - model: mergekit-community/MN-Hekate-Pyrtania-12B layer_range: [12, 16] - model: LatitudeGames/Wayfarer-12B layer_range: [12, 16] - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b layer_range: [12, 16] - model: yamatazen/BlueLight-12B layer_range: [12, 16] - model: yamatazen/LoyalMaid-12B layer_range: [12, 16] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [12, 16] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [12, 16] - sources: - model: mergekit-community/MN-Hekate-Pyrtania-12B layer_range: [16, 20] - model: LatitudeGames/Wayfarer-12B layer_range: [16, 20] - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b layer_range: [16, 20] - model: yamatazen/BlueLight-12B layer_range: [16, 20] - model: yamatazen/LoyalMaid-12B layer_range: [16, 20] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [16, 20] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [20, 24] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [16, 20] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [20, 24] - sources: - model: mergekit-community/MN-Hekate-Pyrtania-12B layer_range: [20, 28] - model: LatitudeGames/Wayfarer-12B layer_range: [20, 28] - model: nbeerbower/mistral-nemo-gutenberg-12B-v4 layer_range: [20, 28] - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b layer_range: [20, 28] - model: yamatazen/BlueLight-12B layer_range: [20, 28] - model: yamatazen/LoyalMaid-12B layer_range: [20, 28] - model: 
mergekit-community/MN-Hekate-Episkopos-17B layer_range: [24, 32] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [36, 44] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [24, 32] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [36, 44] - sources: - model: mergekit-community/MN-Hekate-Pyrtania-12B layer_range: [28, 32] - model: LatitudeGames/Wayfarer-12B layer_range: [28, 32] - model: nbeerbower/mistral-nemo-bophades-12B layer_range: [28, 32] - model: nbeerbower/mistral-nemo-gutenberg-12B-v4 layer_range: [28, 32] - model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b layer_range: [28, 32] - model: yamatazen/BlueLight-12B layer_range: [28, 32] - model: yamatazen/LoyalMaid-12B layer_range: [28, 32] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [32, 36] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [44, 48] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [32, 36] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [44, 48] - sources: - model: mergekit-community/MN-Hekate-Pyrtania-12B layer_range: [32, 40] parameters: weight: 2 - model: nbeerbower/mistral-nemo-bophades-12B layer_range: [32, 40] - model: nbeerbower/mistral-nemo-gutenberg-12B-v4 layer_range: [32, 40] - model: yamatazen/BlueLight-12B layer_range: [32, 40] - model: yamatazen/LoyalMaid-12B layer_range: [32, 40] - model: mergekit-community/MN-Hekate-Episkopos-17B layer_range: [48, 56] - model: mergekit-community/MN-Hekate-Limenoskopos-17B layer_range: [48, 56] parameters: weight: 2 ```
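Configs like the one above are executed with mergekit's documented `mergekit-yaml` command-line tool; a minimal Python wrapper sketch (file paths here are placeholders):

```python
# Run a mergekit config like the one above via the documented
# `mergekit-yaml` CLI (file paths are placeholders).
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged-model", "--cuda"],
    check=True,
)
```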
alibaba-pai/Wan2.1-Fun-V1.1-1.3B-InP
alibaba-pai
2025-04-25T08:18:55Z
0
0
diffusers
[ "diffusers", "safetensors", "i2v", "video", "video-generation", "text-to-video", "en", "zh", "license:apache-2.0", "region:us" ]
text-to-video
2025-04-24T08:41:03Z
--- license: apache-2.0 language: - en - zh pipeline_tag: text-to-video library_name: diffusers tags: - video - video-generation --- # Wan-Fun 😊 Welcome! [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-yellow)](https://huggingface.co/spaces/alibaba-pai/Wan2.1-Fun-1.3B-InP) [![Github](https://img.shields.io/badge/🎬%20Code-Github-blue)](https://github.com/aigc-apps/VideoX-Fun) [English](./README_en.md) | [简体中文](./README.md) # Table of Contents - [Table of Contents](#table-of-contents) - [Model Zoo](#model-zoo) - [Video Showcase](#video-showcase) - [Quick Start](#quick-start) - [How to Use](#how-to-use) - [References](#references) - [License](#license) # Model Zoo V1.1: | Name | Storage | Hugging Face | Model Scope | Description | |--|--|--|--|--| | Wan2.1-Fun-V1.1-1.3B-InP | 19.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-1.3B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-1.3B-InP) | Wan2.1-Fun-V1.1-1.3B text/image-to-video weights, trained at multiple resolutions, supporting start- and end-frame prediction. | | Wan2.1-Fun-V1.1-14B-InP | 47.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-14B-InP) | Wan2.1-Fun-V1.1-14B text/image-to-video weights, trained at multiple resolutions, supporting start- and end-frame prediction. | | Wan2.1-Fun-V1.1-1.3B-Control | 19.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-1.3B-Control) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-1.3B-Control)| Wan2.1-Fun-V1.1-1.3B video control weights supporting different control conditions such as Canny, Depth, Pose, and MLSD, control via reference image + control condition, and trajectory control. Supports multi-resolution (512, 768, 1024) video prediction, trained at 81 frames and 16 frames per second, with multilingual prediction support. | | Wan2.1-Fun-V1.1-14B-Control | 47.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-Control) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-14B-Control)| Wan2.1-Fun-V1.1-14B video control weights supporting different control conditions such as Canny, Depth, Pose, and MLSD, control via reference image + control condition, and trajectory control. Supports multi-resolution (512, 768, 1024) video prediction, trained at 81 frames and 16 frames per second, with multilingual prediction support. | | Wan2.1-Fun-V1.1-1.3B-Control-Camera | 19.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-1.3B-Control) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-1.3B-Control)| Wan2.1-Fun-V1.1-1.3B camera-control weights. Supports multi-resolution (512, 768, 1024) video prediction, trained at 81 frames and 16 frames per second, with multilingual prediction support. | | Wan2.1-Fun-V1.1-14B-Control | 47.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-Control) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-14B-Control)| Wan2.1-Fun-V1.1-14B camera-control weights. Supports multi-resolution (512, 768, 1024) video prediction, trained at 81 frames and 16 frames per second, with multilingual prediction support. | V1.0: | Name | Storage | Hugging Face | Model Scope | Description | |--|--|--|--|--| | Wan2.1-Fun-1.3B-InP | 19.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-1.3B-InP) | Wan2.1-Fun-1.3B text/image-to-video weights, trained at multiple resolutions, supporting start- and end-frame prediction. | | Wan2.1-Fun-14B-InP | 47.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-InP) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-14B-InP) | Wan2.1-Fun-14B text/image-to-video weights, trained at multiple resolutions, supporting start- and end-frame prediction. | | Wan2.1-Fun-1.3B-Control | 19.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-Control) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-1.3B-Control)| Wan2.1-Fun-1.3B video control weights supporting different control conditions such as Canny, Depth, Pose, and MLSD, as well as trajectory control. Supports multi-resolution (512, 768, 1024) video prediction, trained at 81 frames and 16 frames per second, with multilingual prediction support. | | Wan2.1-Fun-14B-Control | 47.0 GB | [🤗Link](https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-Control) | [😄Link](https://modelscope.cn/models/PAI/Wan2.1-Fun-14B-Control)| Wan2.1-Fun-14B video control weights supporting different control conditions such as Canny, Depth, Pose, and MLSD, as well as trajectory control. Supports multi-resolution (512, 768, 1024) video prediction, trained at 81 frames and 16 frames per second, with multilingual prediction support. | # Video Showcase ### Wan2.1-Fun-V1.1-14B-InP &&
Wan2.1-Fun-V1.1-1.3B-InP <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_1.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_2.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_3.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_4.mp4" width="100%" controls autoplay loop></video> </td> </tr> </table> <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_5.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_6.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_7.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/inp_8.mp4" width="100%" controls autoplay loop></video> </td> </tr> </table> ### Wan2.1-Fun-V1.1-14B-Control && Wan2.1-Fun-V1.1-1.3B-Control Generic Control Video + Reference Image: <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> Reference Image </td> <td> Control Video </td> <td> Wan2.1-Fun-V1.1-14B-Control </td> <td> Wan2.1-Fun-V1.1-1.3B-Control </td> <tr> <td> <image src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/6.png" width="100%" controls autoplay loop></image> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/pose.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/14b_ref.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/1_3b_ref.mp4" width="100%" controls autoplay loop></video> </td> <tr> </table> Generic Control Video (Canny, Pose, Depth, etc.) 
and Trajectory Control: <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/guiji.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/guiji_plus_out.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/guiji_out.mp4" width="100%" controls autoplay loop></video> </td> <tr> </table> <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/pose.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/canny.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/depth.mp4" width="100%" controls autoplay loop></video> </td> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/pose_out.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/canny_out.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/depth_out.mp4" width="100%" controls autoplay loop></video> </td> </tr> </table> ### Wan2.1-Fun-V1.1-14B-Control-Camera && Wan2.1-Fun-V1.1-1.3B-Control-Camera <table border="0" style="width: 100%; text-align: left; margin-top: 20px;"> <tr> <td> Pan Up </td> <td> Pan Left </td> <td> Pan Right </td> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/Pan_Up.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/Pan_Left.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/Pan_Right.mp4" width="100%" controls autoplay loop></video> </td> <tr> <td> Pan Down </td> <td> Pan Up + Pan Left </td> <td> Pan Up + Pan Right </td> <tr> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/Pan_Down.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/Pan_Left_Up.mp4" width="100%" controls autoplay loop></video> </td> <td> <video src="https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/wan_fun/asset/v1.1/Pan_Right_Up.mp4" width="100%" controls autoplay loop></video> </td> </tr> </table> # Quick Start ### 1. Cloud usage: AliyunDSW/Docker #### a. Via Alibaba Cloud DSW DSW offers free GPU hours that users can claim once; they remain valid for 3 months after application. Alibaba Cloud provides free GPU time through [Freetier](https://free.aliyun.com/?product=9602825&crowd=enterprise&spm=5176.28055625.J_5831864660.1.e939154aRgha4e&scm=20140722.M_9974135.P_110.MO_1806-ID_9974135-MID_9974135-CID_30683-ST_8512-V_1); claim it and use it in Alibaba Cloud PAI-DSW to launch CogVideoX-Fun within 5 minutes. [![DSW Notebook](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/easyanimate/asset/dsw.png)](https://gallery.pai-ml.com/#/preview/deepLearning/cv/cogvideox_fun) #### b.
Via ComfyUI Our ComfyUI interface is shown below; see the [ComfyUI README](comfyui/README.md) for details. ![workflow graph](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/cogvideoxfunv1_workflow_i2v.jpg) #### c. Via Docker When using Docker, make sure the GPU driver and CUDA environment are correctly installed on your machine, then run the following commands: ``` # pull image docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun # enter image docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun # clone code git clone https://github.com/aigc-apps/VideoX-Fun.git # enter VideoX-Fun's dir cd VideoX-Fun # download weights mkdir models/Diffusion_Transformer mkdir models/Personalized_Model # Please use the huggingface link or modelscope link to download the model. # CogVideoX-Fun # https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-5b-InP # https://modelscope.cn/models/PAI/CogVideoX-Fun-V1.1-5b-InP # Wan # https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-InP # https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-14B-InP ``` ### 2. Local installation: environment check/download/install #### a. Environment check We have verified that this library runs in the following environments: Windows details: - OS: Windows 10 - python: python3.10 & python3.11 - pytorch: torch2.2.0 - CUDA: 11.8 & 12.1 - CUDNN: 8+ - GPU: Nvidia-3060 12G & Nvidia-3090 24G Linux details: - OS: Ubuntu 20.04, CentOS - python: python3.10 & python3.11 - pytorch: torch2.2.0 - CUDA: 11.8 & 12.1 - CUDNN: 8+ - GPU: Nvidia-V100 16G & Nvidia-A10 24G & Nvidia-A100 40G & Nvidia-A100 80G About 60 GB of free disk space is required; please check! #### b. Weight placement It is best to place the [weights](#model-zoo) along the specified paths: **Via ComfyUI**: put the models into ComfyUI's weights folder `ComfyUI/models/Fun_Models/`: ``` 📦 ComfyUI/ ├── 📂 models/ │ └── 📂 Fun_Models/ │ ├── 📂 CogVideoX-Fun-V1.1-2b-InP/ │ ├── 📂 CogVideoX-Fun-V1.1-5b-InP/ │ ├── 📂 Wan2.1-Fun-V1.1-14B-InP │ └── 📂 Wan2.1-Fun-V1.1-1.3B-InP/ ``` **Running the repo's own Python files or UI**: ``` 📦 models/ ├── 📂 Diffusion_Transformer/ │ ├── 📂 CogVideoX-Fun-V1.1-2b-InP/ │ ├── 📂 CogVideoX-Fun-V1.1-5b-InP/ │ ├── 📂 Wan2.1-Fun-V1.1-14B-InP │ └── 📂 Wan2.1-Fun-V1.1-1.3B-InP/ ├── 📂 Personalized_Model/ │ └── your trained transformer model / your trained lora model (for UI load) ``` # How to Use <h3 id="video-gen">1.
Generation </h3> #### a. VRAM-saving options Because Wan2.1 has a very large number of parameters, VRAM-saving options are needed so that it fits on consumer GPUs. Every prediction file provides a GPU_memory_mode, which can be set to model_cpu_offload, model_cpu_offload_and_qfloat8, or sequential_cpu_offload. The same options also apply to CogVideoX-Fun generation. - model_cpu_offload: the whole model is moved to the CPU after use, saving some VRAM. - model_cpu_offload_and_qfloat8: the whole model is moved to the CPU after use and the transformer is quantized to float8, saving more VRAM. - sequential_cpu_offload: each layer of the model is moved to the CPU after use; slower, but saves a large amount of VRAM. qfloat8 partially degrades model quality but saves more VRAM. If you have enough VRAM, model_cpu_offload is recommended. #### b. Via ComfyUI See the [ComfyUI README](comfyui/README.md) for details. #### c. Running Python files - Step 1: download the corresponding [weights](#model-zoo) into the models folder. - Step 2: use the prediction file matching your weights and target. The library currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun, distinguished by folder name under the examples folder; the supported features differ between models, so check case by case. Taking CogVideoX-Fun as an example: - Text-to-video: - modify prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_t2v.py. - then run examples/cogvideox_fun/predict_t2v.py and wait for the result, which is saved in the samples/cogvideox-fun-videos folder. - Image-to-video: - modify validation_image_start, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_i2v.py. - validation_image_start is the starting image of the video and validation_image_end is the ending image. - then run examples/cogvideox_fun/predict_i2v.py and wait for the result, which is saved in the samples/cogvideox-fun-videos_i2v folder. - Video-to-video: - modify validation_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_v2v.py. - validation_video is the reference video for video-to-video. You can run a demo with the following video: [demo video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1/play_guitar.mp4) - then run examples/cogvideox_fun/predict_v2v.py and wait for the result, which is saved in the samples/cogvideox-fun-videos_v2v folder. - Generic control-to-video (Canny, Pose, Depth, etc.): - modify control_video, validation_image_end, prompt, neg_prompt, guidance_scale, and seed in examples/cogvideox_fun/predict_v2v_control.py. - control_video is the control video, extracted with operators such as Canny, Pose, or Depth. You can run a demo with the following video: [demo video](https://pai-aigc-photog.oss-cn-hangzhou.aliyuncs.com/cogvideox_fun/asset/v1.1/pose.mp4) - then run examples/cogvideox_fun/predict_v2v_control.py and wait for the result, which is saved in the samples/cogvideox-fun-videos_v2v_control folder. - Step 3: if you want to plug in other backbones or LoRAs you trained yourself, modify examples/{model_name}/predict_t2v.py or examples/{model_name}/predict_i2v.py and lora_path as needed. #### d. Via the UI The web UI supports text-to-video, image-to-video, video-to-video, and generic control-to-video (Canny, Pose, Depth, etc.). The library currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun, distinguished by folder name under the examples folder; the supported features differ between models, so check case by case. Taking CogVideoX-Fun as an example: - Step 1: download the corresponding [weights](#model-zoo) into the models folder. - Step 2: run examples/cogvideox_fun/app.py to open the Gradio page. - Step 3: choose the generation model on the page, fill in prompt, neg_prompt, guidance_scale, seed, etc., click generate, and wait for the result, which is saved in the sample folder. # References - CogVideo: https://github.com/THUDM/CogVideo/ - EasyAnimate: https://github.com/aigc-apps/EasyAnimate - Wan2.1: https://github.com/Wan-Video/Wan2.1/ - ComfyUI-KJNodes: https://github.com/kijai/ComfyUI-KJNodes - ComfyUI-EasyAnimateWrapper: https://github.com/kijai/ComfyUI-EasyAnimateWrapper - ComfyUI-CameraCtrl-Wrapper: https://github.com/chaojie/ComfyUI-CameraCtrl-Wrapper - CameraCtrl: https://github.com/hehao13/CameraCtrl # License This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).
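The three GPU_memory_mode options above correspond to generic offloading hooks that diffusers pipelines expose; a rough Python illustration follows (the generic DiffusionPipeline loading and the float8 step are assumptions, not the repo's exact predict-script code):

```python
# Rough illustration of the three GPU_memory_mode options using diffusers'
# generic offloading hooks. The pipeline class and the float8 step are
# assumptions, not the repo's exact implementation.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "alibaba-pai/Wan2.1-Fun-V1.1-1.3B-InP", torch_dtype=torch.bfloat16
)

GPU_memory_mode = "model_cpu_offload"  # or model_cpu_offload_and_qfloat8 / sequential_cpu_offload

if GPU_memory_mode == "sequential_cpu_offload":
    pipe.enable_sequential_cpu_offload()  # per-layer offload: slowest, least VRAM
else:
    pipe.enable_model_cpu_offload()       # whole-model offload after use
    # model_cpu_offload_and_qfloat8 would additionally quantize the
    # transformer to float8 before offloading (implementation repo-specific).
```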
heyIamUmair/flan-t5-legal-finetuned_1st
heyIamUmair
2025-04-25T08:16:44Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2025-04-25T08:15:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dgambettaphd/M_llm3_gen7_run0_X_doc1000_synt64_tot128_FRESH
dgambettaphd
2025-04-25T08:16:41Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-04-25T08:15:58Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MayBashendy/arabic_SDP_all_binary_multilingual_e5_small_lr3e-05_targ5_dev1234578_epoch530
MayBashendy
2025-04-25T08:12:47Z
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2025-04-25T08:12:19Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
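No usage snippet is provided, so here is a hedged sketch of how a `PyTorchModelHubMixin` checkpoint is typically reloaded. The actual model class and its constructor arguments are not documented in this repo; `MyClassifier` below is purely hypothetical and must be replaced with the class the weights were saved from:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyClassifier(nn.Module, PyTorchModelHubMixin):  # hypothetical stand-in
    def __init__(self, hidden_dim: int = 384, num_labels: int = 2):
        super().__init__()
        self.head = nn.Linear(hidden_dim, num_labels)  # placeholder layers

    def forward(self, x):
        return self.head(x)

# from_pretrained restores the config and weights saved via push_to_hub;
# it only works if the class definition matches the one used at save time.
model = MyClassifier.from_pretrained(
    "MayBashendy/arabic_SDP_all_binary_multilingual_e5_small_lr3e-05_targ5_dev1234578_epoch530"
)
```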
AdamShih/ppo-Huggy
AdamShih
2025-04-25T08:07:34Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2025-04-25T06:52:11Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: AdamShih/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
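If you prefer to fetch the trained policy programmatically rather than through the browser, a minimal sketch with `huggingface_hub` follows (an assumption on our part, not part of the ML-Agents CLI; the repo typically contains the exported `.onnx` policy and TensorBoard logs):

```python
from huggingface_hub import snapshot_download

# Download every file in the model repo to a local cache directory.
local_dir = snapshot_download(repo_id="AdamShih/ppo-Huggy")
print(local_dir)  # point Unity / ML-Agents tooling at this folder
```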
KevayneCst/ppo-SnowballTarget
KevayneCst
2025-04-25T08:06:13Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2025-04-25T08:06:06Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: KevayneCst/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
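As with other ML-Agents repos, the files can also be pulled locally with `huggingface_hub` — a sketch, not part of the ML-Agents workflow itself:

```python
from huggingface_hub import snapshot_download

# Fetch the trained SnowballTarget policy files to a local folder.
local_dir = snapshot_download(repo_id="KevayneCst/ppo-SnowballTarget")
print(local_dir)
```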
uiovasot/piano_llama_v4
uiovasot
2025-04-25T08:05:17Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:uiovasot/piano_llama_v3", "base_model:quantized:uiovasot/piano_llama_v3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-04-25T07:49:16Z
--- base_model: uiovasot/piano_llama_v3 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** uiovasot - **License:** apache-2.0 - **Finetuned from model :** uiovasot/piano_llama_v3 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
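No usage example is included; a minimal sketch for loading the safetensors weights with the standard `transformers` auto classes (assuming the checkpoint follows the usual Llama layout — the GGUF files would instead be used with a llama.cpp-based runtime):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("uiovasot/piano_llama_v4")
model = AutoModelForCausalLM.from_pretrained("uiovasot/piano_llama_v4")

# The prompt format is an assumption; the card does not document one.
inputs = tokenizer("Write a short piano melody:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```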
madan2248c/phi3-emotion-finetuned
madan2248c
2025-04-25T08:04:04Z
0
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "generated_from_trainer", "conversational", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T08:01:34Z
--- library_name: transformers license: mit base_model: microsoft/Phi-3-mini-4k-instruct tags: - generated_from_trainer model-index: - name: phi3-emotion-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi3-emotion-finetuned This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.4.1+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
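The repo carries the `custom_code` tag, so loading it likely requires `trust_remote_code` — a hedged sketch (review the remote code before enabling this flag):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "madan2248c/phi3-emotion-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
```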
aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF
aisingapore
2025-04-25T07:58:55Z
850
0
transformers
[ "transformers", "gguf", "text-generation", "en", "zh", "vi", "id", "th", "fil", "ta", "ms", "km", "lo", "my", "jv", "su", "arxiv:2504.05747", "base_model:aisingapore/Llama-SEA-LION-v3-8B-IT", "base_model:quantized:aisingapore/Llama-SEA-LION-v3-8B-IT", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-12-16T13:21:20Z
--- library_name: transformers pipeline_tag: text-generation base_model: - aisingapore/Llama-SEA-LION-v3-8B-IT language: - en - zh - vi - id - th - fil - ta - ms - km - lo - my - jv - su license: llama3.1 --- <div> <img src="llama_3.1_8b_sea-lion_v3_gguf_banner.png"/> </div> # Llama-SEA-LION-v3-8B-IT [SEA-LION](https://arxiv.org/abs/2504.05747) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region. Llama-SEA-LION-v3-8B-IT is a multilingual model that has been fine-tuned in two stages on approximately **12.3M English instruction-completion pairs** alongside a pool of **4.5M Southeast Asian instruction-completion pairs** from SEA languages such as Indonesian, Javanese, Sundanese, Tamil, Thai and Vietnamese. SEA-LION stands for _Southeast Asian Languages In One Network_. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Sundanese, Tamil, Thai, Vietnamese - **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE) ## Description This repo contains `GGUF` format model files for [aisingapore/Llama-SEA-LION-v3-8B-IT](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT). #### Model Weights Included in this repository: - [Llama-SEA-LION-v3-8B-IT-F16](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-F16.gguf) - [Llama-SEA-LION-v3-8B-IT-Q2_K](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q2_K.gguf) - [Llama-SEA-LION-v3-8B-IT-Q3_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q3_K_M.gguf) - [Llama-SEA-LION-v3-8B-IT-Q4_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q4_0.gguf) - [Llama-SEA-LION-v3-8B-IT-Q4_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q4_K_M.gguf) - [Llama-SEA-LION-v3-8B-IT-Q5_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q5_0.gguf) - [Llama-SEA-LION-v3-8B-IT-Q5_K_M](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q5_K_M.gguf) - [Llama-SEA-LION-v3-8B-IT-Q6_K](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q6_K.gguf) - [Llama-SEA-LION-v3-8B-IT-Q8_0](https://huggingface.co/aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF/blob/main/Llama-SEA-LION-v3-8B-IT-Q8_0.gguf) ### Caveats It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning. ## Limitations ### Safety Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
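The card does not include a runnable example; one common way to use these files is `llama-cpp-python` — a minimal sketch (Q4_K_M shown; any of the `.gguf` files listed above works):

```python
from llama_cpp import Llama

# Downloads the chosen quantisation from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="aisingapore/Llama-SEA-LION-v3-8B-IT-GGUF",
    filename="Llama-SEA-LION-v3-8B-IT-Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Apa khabar?"}]
)
print(out["choices"][0]["message"]["content"])
```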
## Technical Specifications ### Fine-Tuning Details Llama-SEA-LION-v3-8B-IT was tuned using a combination of a full parameter fine-tune, on-policy alignment, and model merges of the best performing checkpoints. The training process for fine-tuning was approximately 1024 GPU hours, on a single node of 8x H100-80GB GPUs. ## Data Llama-SEA-LION-v3-8B-IT was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source. ## Call for Contributions We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions. ## The Team Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin ## Acknowledgements [AI Singapore](​​https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
THEGAMECHANGER/SDXL_Finetune_Dreambooth_Lora
THEGAMECHANGER
2025-04-25T07:58:48Z
1
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:Lykon/dreamshaper-xl-turbo", "base_model:adapter:Lykon/dreamshaper-xl-turbo", "license:openrail++", "region:us" ]
text-to-image
2025-04-25T06:25:09Z
--- base_model: Lykon/dreamshaper-xl-turbo library_name: diffusers license: openrail++ instance_prompt: A v3ct0r image of widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - THEGAMECHANGER/SDXL_Finetune_Dreambooth_Lora <Gallery /> ## Model description These are THEGAMECHANGER/SDXL_Finetune_Dreambooth_Lora LoRA adaptation weights for Lykon/dreamshaper-xl-turbo. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use `A v3ct0r image of` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/THEGAMECHANGER/SDXL_Finetune_Dreambooth_Lora/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch (the prompt is a placeholder; it assumes the standard weight layout produced by the diffusers DreamBooth LoRA script):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the base model this LoRA was trained on, then attach the adapter.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-xl-turbo", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("THEGAMECHANGER/SDXL_Finetune_Dreambooth_Lora")

# Use the documented trigger phrase in the prompt.
image = pipeline("A v3ct0r image of a mountain landscape").images[0]
image.save("vector_mountain.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
loraug/smollLMInstruct_multiple
loraug
2025-04-25T07:58:31Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-04-25T07:58:21Z
--- base_model: unsloth/smollm-360m-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** loraug - **License:** apache-2.0 - **Finetuned from model :** unsloth/smollm-360m-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
isaiahbjork/poker-reasoning-3b
isaiahbjork
2025-04-25T06:24:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T04:42:06Z
--- base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** isaiahbjork - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
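A hedged sketch for trying the model with the `transformers` pipeline (chat formatting is assumed to come from the Qwen2.5 template baked into the tokenizer):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="isaiahbjork/poker-reasoning-3b")
messages = [{"role": "user", "content": "I hold A♠K♠ on a Q♠J♠4♥ flop. What is my best line?"}]
output = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```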
sofiavalan/llama381binstruct_summarize_short
sofiavalan
2025-04-25T06:24:38Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:NousResearch/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:NousResearch/Meta-Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-04-25T06:24:29Z
--- base_model: NousResearch/Meta-Llama-3.1-8B-Instruct library_name: transformers model_name: llama381binstruct_summarize_short tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama381binstruct_summarize_short This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sofiavalan/llama381binstruct_summarize_short", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sofia-valcarcel-superbet/huggingface/runs/qow6tf2m) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
LINK-Sophie-Rain-Spiderman-Viral-Videos/Official.Sophie.Rain.Spiderman.Leaks.Video
LINK-Sophie-Rain-Spiderman-Viral-Videos
2025-04-25T06:23:46Z
0
0
null
[ "region:us" ]
null
2025-04-25T06:23:25Z
kpushpender/xlsr-model-1
kpushpender
2025-04-25T06:22:53Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-04-25T03:59:00Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-large-xlsr-53 tags: - generated_from_trainer metrics: - wer model-index: - name: xlsr-model-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlsr-model-1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8364 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 45 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | No log | 1.0 | 113 | 1.1117 | 1.0 | | 93.3398 | 2.0 | 226 | 0.8533 | 1.0 | | 93.3398 | 3.0 | 339 | 0.8746 | 1.0 | | 0.8879 | 4.0 | 452 | 0.8180 | 1.0 | | 0.8879 | 5.0 | 565 | 0.8439 | 1.0 | | 0.7771 | 6.0 | 678 | 0.8274 | 1.0 | | 0.7771 | 7.0 | 791 | 0.8668 | 1.0 | | 0.7594 | 8.0 | 904 | 0.8162 | 1.0 | | 0.7586 | 9.0 | 1017 | 0.8365 | 1.0 | | 0.7586 | 10.0 | 1130 | 0.8364 | 1.0 | ### Framework versions - Transformers 4.52.0.dev0 - Pytorch 2.5.1+cu121 - Datasets 3.2.0 - Tokenizers 0.21.0
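No inference example is given; a minimal sketch with the ASR pipeline follows. Note that the evaluation WER of 1.0 above suggests the checkpoint may not yet produce useful transcriptions:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="kpushpender/xlsr-model-1")
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```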
LINK-Sophie-Rain-Spiderman-Viral-Videos/Original.Sophie.Rain.Spiderman.Video.Leaks.official
LINK-Sophie-Rain-Spiderman-Viral-Videos
2025-04-25T06:22:41Z
0
0
null
[ "region:us" ]
null
2025-04-25T06:22:21Z
CALISTA-INDUSTRY/Gemma3_1B_GRPO_MULTIMODA
CALISTA-INDUSTRY
2025-04-25T06:21:41Z
0
0
transformers
[ "transformers", "gemma3_text", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gemma-3-1b-it", "base_model:finetune:unsloth/gemma-3-1b-it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T06:21:30Z
--- base_model: unsloth/gemma-3-1b-it tags: - text-generation-inference - transformers - unsloth - gemma3_text license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** CALISTA-INDUSTRY - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-1b-it This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
aliyanodair/dfgbfgb
aliyanodair
2025-04-25T06:20:19Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2025-04-25T06:20:19Z
--- license: bigscience-bloom-rail-1.0 ---
mlfoundations-dev/b2_science_fasttext_pos_expert_qa_10k
mlfoundations-dev
2025-04-25T06:19:44Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-25T01:14:53Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: b2_science_fasttext_pos_expert_qa_10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # b2_science_fasttext_pos_expert_qa_10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/b2_science_fasttext_pos_expert_qa_10k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
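A hedged sketch for inference with the `transformers` pipeline (the chat template is assumed to be inherited from Qwen/Qwen2.5-7B-Instruct):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mlfoundations-dev/b2_science_fasttext_pos_expert_qa_10k",
)
messages = [{"role": "user", "content": "Why does ice float on water?"}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```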
BradyCaruzor/CharmHealthSkinTagRemover
BradyCaruzor
2025-04-25T06:14:29Z
0
0
null
[ "region:us" ]
null
2025-04-25T06:13:41Z
➥ ✅Shop Now - https://supplementcarts.com/order-charm-health-skin-tag-remover/ ✔ Product Name — Charm Health Skin Tag Remover ✔ Side Effects — No Major Side Effects ✔ Category — Health ✔ Results — In 1–2 Months ✔ Availability — Online ✔ Rating: — 5.0/5.0 ⭐⭐⭐⭐⭐ Introduction Skin imperfections like tags, moles, and warts are a common concern for many people, affecting both appearance and confidence. While some turn to invasive procedures, there is an increasing demand for natural, non-surgical solutions that are effective and safe. One such solution that has caught the attention of skincare enthusiasts is Charm Health Skin Tag Remover. Touted as a fast-acting and natural remedy, this product claims to help eliminate skin tags and other blemishes from the comfort of your home. But does it live up to the hype? Let's dive into a detailed review and exploration of Charm Health Skin Tag Remover. What Is Charm Health Skin Tag Remover? Charm Health Skin Tag Remover is a topical serum designed to remove unwanted skin tags, moles, warts, and other benign skin growths without the need for painful surgery or freezing treatments. The product utilizes a potent blend of natural ingredients that work synergistically to target the root of the skin blemish, leading to its eventual removal. Unlike harsh chemical treatments or laser procedures, this skin tag remover promises a gentle approach that is suitable for most skin types. The product has grown in popularity due to its easy application, natural formula, and quick results.
JYanchibi/Strv
JYanchibi
2025-04-25T06:14:10Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-25T06:14:09Z
--- license: apache-2.0 ---
yjo3/sd-yjo-model-lora-sdxl
yjo3
2025-04-25T06:13:31Z
5
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2025-04-22T01:56:16Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: creativeml-openrail-m inference: true tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - diffusers-training - lora --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - yjo3/sd-yjo-model-lora-sdxl These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the yjo3/sample-M dataset. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Intended uses & limitations #### How to use A minimal sketch (the prompt is a placeholder; it loads the fp16-fix VAE named above, which is common practice for SDXL inference in float16):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card names madebyollin/sdxl-vae-fp16-fix as the training VAE.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("yjo3/sd-yjo-model-lora-sdxl")

image = pipeline("a photo in the style of the yjo3/sample-M dataset").images[0]
image.save("sample.png")
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
Kissandu/help
Kissandu
2025-04-25T06:12:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-04-25T06:12:20Z
--- license: apache-2.0 ---
Sin2pi/Echo12
Sin2pi
2025-04-25T06:09:51Z
0
2
null
[ "license:apache-2.0", "region:us" ]
null
2024-12-29T11:16:08Z
--- license: apache-2.0 --- update - added the option to blend waveform and spectrogram as a learnable input added betweenness module (experimental) and cosine similarity as a blendable and learnable option in attention. Initial spectrogram/waveform data is here: https://github.com/sine2pi/asr_model_sw ```python import os import warnings import logging import torch import torch.nn.functional as F import torch.nn as nn from torch import Tensor import numpy as np from typing import Optional, Dict import gzip import base64 import matplotlib.pyplot as plt from sklearn.metrics import accuracy_score, precision_score, f1_score, recall_score from datetime import datetime from datasets import load_dataset, Audio, DatasetDict from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments, WhisperFeatureExtractor, WhisperTokenizerFast from typing import Union, List, Any import evaluate import transformers from dataclasses import dataclass from itertools import chain # torch.backends.cudnn.allow_tf32 = True # torch.backends.cuda.matmul.allow_tf32 = True transformers.utils.logging.set_verbosity_error() device = torch.device(device="cuda:0") dtype = torch.float32 torch.set_default_dtype(dtype) warnings.filterwarnings("ignore") logging.basicConfig(level=logging.ERROR) tox = {"device": torch.device("cuda:0" if torch.cuda.is_available() else "cpu"), "dtype": torch.float32} @dataclass class Dimensions: vocab: int text_ctx: int text_dims: int text_head: int decoder_idx: int mels: int audio_ctx: int audio_dims: int audio_head: int encoder_idx: int pad_token_id: int eos_token_id: int decoder_start_token_id: int act: str def visualize_attention_weights(attn_weights): import seaborn as sns batch, heads, seq_len, _ = attn_weights.shape plt.figure(figsize=(12, 4)) for h in range(min(4, heads)): plt.subplot(1, min(4, heads), h+1) sns.heatmap(attn_weights[0, h].detach().cpu().numpy()) plt.title(f'Head {h}') plt.suptitle("Attention Weights") plt.show() def visualize_rotary_angles(rotary, seq_len): freqs = rotary.inv_freq.detach().cpu().numpy() t = np.arange(seq_len) angles = np.outer(t, freqs) plt.figure(figsize=(10, 6)) for i in range(min(4, angles.shape[1])): plt.plot(angles[:, i], label=f'Freq {i}') plt.title("Rotary Angles per Position") plt.xlabel("Position") plt.ylabel("Angle (radians)") plt.legend() plt.show() def visualize_rotary_effects(x, rotary): seq_len = x.shape[1] freqs_cis = rotary(seq_len) x_rot = rotary.apply_rotary(x, freqs_cis) idx = 0 dims_to_plot = [0, 1, 2, 3] plt.figure(figsize=(10, 6)) for d in dims_to_plot: plt.plot(x[idx, :, d].detach().cpu().numpy(), label=f'Orig dim {d}') plt.plot(x_rot[idx, :, d].detach().cpu().numpy(), '--', label=f'Rotary dim {d}') plt.title("Effect of Rotary on Embedding Dimensions") plt.xlabel("Sequence Position") plt.ylabel("Embedding Value") plt.legend() plt.show() def plot_betweenness(be, title="Betweenness"): """ Plots betweenness for a batch of sequences. Args: be: Tensor of shape (batch, seq_len) """ be = be.detach().cpu().numpy() plt.figure(figsize=(12, 3)) for i in range(min(4, be.shape[0])): plt.plot(be[i], label=f"Sample {i}") plt.title(title) plt.xlabel("Sequence Position") plt.ylabel("Betweenness") plt.legend() plt.show() def plot_waveform_and_spectrogram(waveform, spectrogram, sample_idx=0, sr=16000, title="Waveform and Spectrogram"): """ Plots the waveform and spectrogram for a single sample. 
Args: waveform: Tensor of shape (batch, 1, n_samples) or (batch, n_samples) spectrogram: Tensor of shape (batch, seq_len, n_mels) or (batch, n_mels, seq_len) sample_idx: which sample in the batch to plot sr: sample rate for x-axis scaling (default 16kHz) """ wf = waveform[sample_idx].detach().cpu().numpy() if wf.ndim > 1: wf = wf.squeeze() t = np.arange(len(wf)) / sr spec = spectrogram[sample_idx].detach().cpu().numpy() if spec.shape[0] < spec.shape[1]: spec = spec.T fig, axs = plt.subplots(2, 1, figsize=(14, 6), sharex=False) axs[0].plot(t, wf, color="tab:blue") axs[0].set_title("Waveform") axs[0].set_xlabel("Time (s)") axs[0].set_ylabel("Amplitude") axs[1].imshow(spec.T, aspect="auto", origin="lower", cmap="magma") axs[1].set_title("Spectrogram") axs[1].set_xlabel("Frame") axs[1].set_ylabel("Mel Bin") plt.tight_layout() plt.show() def plot_betweenness_overlay(be, x, sample_idx=0, title="Betweenness Overlay"): """ Overlay betweenness with spectrogram and energy for a single sample. Args: be: Tensor of shape (batch, seq_len) x: Tensor of shape (batch, seq_len, n_mels) or (batch, n_mels, seq_len) sample_idx: which sample in the batch to plot """ import matplotlib.pyplot as plt be = be[sample_idx].detach().cpu().numpy() if x.shape[1] != be.shape[0] and x.shape[-1] == be.shape[0]: x = x.permute(0, 2, 1) spec = x[sample_idx].detach().cpu().numpy() energy = spec.mean(axis=1) fig, ax1 = plt.subplots(figsize=(14, 5)) ax1.set_title(title) ax1.set_xlabel("Sequence Position") ax1.set_ylabel("Betweenness", color="tab:red") ax1.plot(be, color="tab:red", label="Betweenness") ax1.tick_params(axis='y', labelcolor="tab:red") ax1.legend(loc="upper left") ax2 = ax1.twinx() ax2.set_ylabel("Energy", color="tab:blue") ax2.plot(energy, color="tab:blue", alpha=0.5, label="Energy") ax2.tick_params(axis='y', labelcolor="tab:blue") ax2.legend(loc="upper right") plt.show() plt.figure(figsize=(14, 3)) plt.imshow(spec.T, aspect="auto", origin="lower", cmap="magma") plt.colorbar(label="Spectrogram (dB)") plt.title("Input Spectrogram") plt.xlabel("Sequence Position") plt.ylabel("Mel Bin") plt.show() class BetweennessModule(nn.Module): def __init__(self, dim, adjustment_scale=1.0, window_size=10): super().__init__() self.dim = dim self.adjustment_scale = adjustment_scale self.content_proj = nn.Linear(dim, dim) self.betweenness_gate = nn.Parameter(torch.ones(1) * 0.5) self.window_size = window_size self.norm = nn.LayerNorm(dim) self.dropout = nn.Dropout(0.1) def compute_betweenness(self, x): batch, seq_len, dim = x.shape content = self.norm(self.content_proj(self.dropout(x))) device = x.device window = self.window_size betweenness = torch.zeros(batch, seq_len, device=device) for offset in range(1, window + 1): n_indices = seq_len - 2 * offset if n_indices <= 0: continue i = torch.arange(n_indices, device=device) j = i + offset k = i + 2 * offset c_i = content[:, i, :] c_j = content[:, j, :] c_k = content[:, k, :] def cos_dist(a, b): a = F.normalize(a, dim=-1) b = F.normalize(b, dim=-1) return 1 - (a * b).sum(dim=-1) direct = cos_dist(c_i, c_k) path = cos_dist(c_i, c_j) + cos_dist(c_j, c_k) safe_direct = torch.clamp(direct, min=1e-3) between_score = 1.0 - (path - direct) / safe_direct betweenness[:, j] += between_score betweenness = betweenness / max(window, 1) betweenness = betweenness - betweenness.mean(dim=1, keepdim=True) std = betweenness.std(dim=1, keepdim=True) + 1e-6 betweenness = betweenness / std betweenness = self.betweenness_gate * self.adjustment_scale * betweenness betweenness = torch.clamp(betweenness, -2.0, 
2.0) return betweenness def apply_to_rope(rope_func, x, positions, betweenness_module): adjustments = betweenness_module.get_position_adjustments(x) adjusted_positions = positions + adjustments return rope_func(x, adjusted_positions) def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = decoder_start_token_id shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids class LayerNorm(nn.LayerNorm): def forward(self, x: Tensor) -> Tensor: return super().forward(x.float()).type(x.dtype) class RMSNorm(nn.RMSNorm): def forward(self, x: Tensor) -> Tensor: """Preserve the input dtype throughout the normalization""" x_float = x.float() variance = x_float.pow(2).mean(-1, keepdim=True) eps = self.eps if self.eps is not None else torch.finfo(x_float.dtype).eps x_normalized = x_float * torch.rsqrt(variance + eps) if self.weight is not None: return (x_normalized * self.weight).type(x.dtype) return x_normalized.type(x.dtype) class Linear(nn.Linear): def forward(self, x: Tensor) -> Tensor: return F.linear(x, self.weight.to(x.dtype), None if self.bias is None else self.bias.to(x.dtype)) class Conv1d(nn.Conv1d): def _conv_forward( self, x: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor: return super()._conv_forward(x, weight.to(x.dtype), None if bias is None else bias.to(x.dtype)) class Conv2d(nn.Conv2d): def _conv_forward( self, x: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor: return super()._conv_forward( x, weight.to(x.dtype), None if bias is None else bias.to(x.dtype)) class ParameterCycler: def __init__(self, parameters): self.parameters = parameters self.current_idx = 0 def toggle_requires_grad(self): for i, param in enumerate(self.parameters): param.requires_grad = i == self.current_idx self.current_idx = (self.current_idx + 1) % len(self.parameters) def _shape(self, tensor: torch.Tensor, ctx: int, batch: int): return tensor.view(batch, ctx, self.head, self.head_dim).transpose(1, 2).contiguous() def exists(val): return val is not None def default(val, d): return val if exists(val) else d def sinusoids(length, channels, max_timescale=10000): """Returns sinusoids for positional embedding""" assert channels % 2 == 0 log_timescale_increment = np.log(max_timescale) / (channels // 2 - 1) inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2)) scaled_time = torch.arange(length)[:, np.newaxis] * inv_timescales[np.newaxis, :] return torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], dim=1) class Rotary(nn.Module): def __init__(self, dim, max_seq_len=4096, learned_freq=True): super().__init__() self.dim = dim self.inv_freq = nn.Parameter( 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim)), requires_grad=learned_freq ) self.bias = nn.Parameter(torch.zeros(max_seq_len, dim // 2)) def forward(self, positions): if isinstance(positions, int): t = torch.arange(positions, device=self.inv_freq.device).float() else: t = positions.float().to(self.inv_freq.device) freqs = torch.einsum('i,j->ij', t, self.inv_freq) freqs = freqs + self.bias[:freqs.shape[0]] freqs_cis = torch.polar(torch.ones_like(freqs), freqs) return freqs_cis @staticmethod def apply_rotary(x, freqs_cis): x1 = x[..., :freqs_cis.shape[-1]*2] x2 = x[..., freqs_cis.shape[-1]*2:] x1 = x1.float().reshape(*x1.shape[:-1], -1, 2).contiguous() x1 = torch.view_as_complex(x1) x1 = x1 * 
freqs_cis x1 = torch.view_as_real(x1).flatten(-2) return torch.cat([x1.type_as(x), x2], dim=-1) class Multihead(nn.Module): blend = False cos = False mag = False def __init__(self, dims: int, head: int): super().__init__() self.dims = dims self.head = head head_dim = dims // head self.head_dim = head_dim self.dropout = 0.1 self.q = Linear(dims, dims) self.k = Linear(dims, dims, bias=False) self.v = Linear(dims, dims) self.o = Linear(dims, dims) self.use_betweenness = False if self.use_betweenness: self.betweenness = BetweennessModule(dim=head_dim, window_size=10) self.rotary = Rotary(dim=head_dim, learned_freq=True) if Multihead.blend: self.factor = nn.Parameter(torch.tensor(0.5, **tox)) def compute_cosine_attention(self, q: Tensor, k: Tensor, v: Tensor, mask): ctx = q.shape[1] qn = torch.nn.functional.normalize(q, dim=-1, eps=1e-12) kn = torch.nn.functional.normalize(k, dim=-1, eps=1e-12) qk = torch.matmul(qn, kn.transpose(-1, -2)) if Multihead.mag: qm = torch.norm(q, dim=-1, keepdim=True) km = torch.norm(k, dim=-1, keepdim=True) ms = (qm * km.transpose(-1, -2)) ** 0.5 ms = torch.clamp(ms, min=1e-8) qk = qk * ms if mask is not None: qk = qk + mask[:ctx, :ctx] w = F.softmax(qk.float(), dim=-1).to(q.dtype) w = F.dropout(w, p=self.dropout, training=self.training) out = torch.matmul(w, v) return out, qk def forward(self, x: Tensor, xa: Optional[Tensor] = None, mask = None, kv_cache = None): q = self.q(x) if kv_cache is None or xa is None or self.k not in kv_cache: k = self.k(x if xa is None else xa) v = self.v(x if xa is None else xa) else: k = kv_cache[self.k] v = kv_cache[self.v] out, qk = self._forward(q, k, v, mask) return self.o(out), qk def _forward(self, q: Tensor, k: Tensor, v: Tensor, mask = None): ctx_q = q.shape[1] ctx_k = k.shape[1] ctx = q.shape[1] dims = self.dims scale = (dims // self.head) ** -0.25 q = q.view(*q.shape[:2], self.head, -1).permute(0, 2, 1, 3) k = k.view(*k.shape[:2], self.head, -1).permute(0, 2, 1, 3) v = v.view(*v.shape[:2], self.head, -1).permute(0, 2, 1, 3) if q.shape[2] == k.shape[2]: freqs_cis = self.rotary(ctx_q) q = self.rotary.apply_rotary(q, freqs_cis) k = self.rotary.apply_rotary(k, freqs_cis) else: pos_q = torch.linspace(0, 1, ctx_q, device=q.device) pos_k = torch.linspace(0, 1, ctx_k, device=k.device) freqs_cis_q = self.rotary(pos_q) freqs_cis_k = self.rotary(pos_k) q = self.rotary.apply_rotary(q, freqs_cis_q) k = self.rotary.apply_rotary(k, freqs_cis_k) if Multihead.blend: qk = (q * scale) @ (k * scale).transpose(-1, -2) if mask is not None: qk = qk + mask[:ctx, :ctx] qk = qk.float() w = F.softmax(qk.float(), dim=-1).to(q.dtype) w = F.dropout(w, p=self.dropout, training=self.training) out = torch.matmul(w, v) cos_w, cos_qk = self.compute_cosine_attention(q, k, v, mask) blend = torch.sigmoid(self.factor) out = blend * cos_w + (1 - blend) * out qk = blend * cos_qk + (1 - blend) * qk if Multihead.cos: out, qk = self.compute_cosine_attention(q, k, v, mask) else: qk = (q * scale) @ (k * scale).transpose(-1, -2) if self.use_betweenness: batch, heads, seq_len, head_dim = q.shape q_reshaped = q.reshape(batch * heads, seq_len, head_dim) betweenness = self.betweenness.compute_betweenness(q_reshaped) betweenness = betweenness.view(batch, heads, seq_len) betw_bias = betweenness.unsqueeze(-1) qk = qk + betw_bias if mask is not None: qk = qk + mask[:ctx, :ctx] qk = qk.float() w = F.softmax(qk.float(), dim=-1).to(q.dtype) w = F.dropout(w, p=self.dropout, training=self.training) out = torch.matmul(w, v) out = out.permute(0, 2, 1, 3).flatten(start_dim=2) qk = 
qk.detach() if self.training else qk return out, qk class Residual(nn.Module): def __init__(self, dims: int, head: int, cross_attention: bool = False, act = "relu"): super().__init__() self.dims = dims self.head = head self.cross_attention = cross_attention self.dropout = 0.1 self.blend_xa = nn.Parameter(torch.tensor(0.5), requires_grad=True) self.blend = torch.sigmoid(self.blend_xa) act_map = {"gelu": nn.GELU(), "relu": nn.ReLU(), "sigmoid": nn.Sigmoid(), "tanh": nn.Tanh(), "leaky_relu": nn.LeakyReLU(), "elu": nn.ELU()} self.act = act_map.get(act, nn.GELU()) self.attna = Multihead(dims=dims, head=head) self.attnb = Multihead(dims=dims, head=head) if cross_attention else None mlp = dims * 4 self.mlp = nn.Sequential(Linear(dims, mlp), self.act, Linear(mlp, dims)) self.lna = RMSNorm(normalized_shape=dims) self.lnb = RMSNorm(normalized_shape=dims) if cross_attention else None self.lnc = RMSNorm(normalized_shape=dims) def forward(self, x, xa=None, mask=None, kv_cache=None): mask = mask if isinstance(self, TextDecoder) else None r = x x = x + self.attna(self.lna(x), mask=mask, kv_cache=kv_cache)[0] if self.attnb and xa is not None: cross_out = self.attnb(self.lnb(x), xa, kv_cache=kv_cache)[0] x = self.blend * x + (1 - self.blend) * cross_out x = x + self.mlp(self.lnc(x)) x = x + r return x class SEBlock(nn.Module): def __init__(self, channels, reduction=16): super().__init__() self.pool = nn.AdaptiveAvgPool1d(1) self.fc = nn.Sequential( nn.Linear(channels, channels // reduction), nn.ReLU(), nn.Linear(channels // reduction, channels), nn.Sigmoid() ) def forward(self, x): b, c, _ = x.size() y = self.pool(x).view(b, c) y = self.fc(y).view(b, c, 1) return x * y class AudioEncoder(nn.Module): def __init__(self, mels: int, ctx: int, dims: int, head: int, layer, act: str = "relu"): super().__init__() self._counter = 0 self.use_betweenness = False self.dims = dims self.head = head self.head_dim = dims // head self.mels = mels self.ctx = ctx self.dropout = 0.1 act_map = {"gelu": nn.GELU(), "relu": nn.ReLU(), "sigmoid": nn.Sigmoid(), "tanh": nn.Tanh(), "leaky_relu": nn.LeakyReLU(), "elu": nn.ELU()} self.act = act_map.get(act, nn.GELU()) self.blend_sw = nn.Parameter(torch.tensor(0.5), requires_grad=True) self.blend = torch.sigmoid(self.blend_sw) self.ln_enc = RMSNorm(normalized_shape=dims) self.register_buffer("positional_embedding", sinusoids(ctx, dims)) if self.use_betweenness: self.betweenness = BetweennessModule(dim=dims, window_size=1, adjustment_scale=0.5) self.se = nn.Sequential( Conv1d(mels, dims, kernel_size=3, padding=1), self.act, Conv1d(dims, dims, kernel_size=3, stride=1, padding=2, dilation=2), Conv1d(dims, dims, kernel_size=3, stride=1, padding=1, groups=dims), Conv1d(dims, dims, kernel_size=1), SEBlock(dims, reduction=16), self.act, nn.Dropout(p=self.dropout), Conv1d(dims, dims, kernel_size=3, stride=1, padding=1) ) self.we = nn.Sequential( nn.Conv1d(1, dims, kernel_size=11, stride=5, padding=5), nn.GELU(), nn.Conv1d(dims, dims, kernel_size=5, stride=2, padding=2), nn.GELU(), nn.AdaptiveAvgPool1d(ctx), ) self.blockA = (nn.ModuleList([Residual(dims=dims, head=head, cross_attention=False, act="relu") for _ in range(layer)]) if layer > 0 else None) def forward(self, x, w) -> Tensor: if x is not None: if w is not None: x_spec = self.se(x).permute(0, 2, 1) w_wave = self.we(w).permute(0, 2, 1) if self._counter < 1: plot_waveform_and_spectrogram(w, x) x = (x_spec + self.positional_embedding).to(x.dtype) w = w_wave x = self.blend * x + (1 - self.blend) * w else: x = self.se(x) x = x.permute(0, 2, 
1)
            assert x.shape[1:] == self.positional_embedding.shape, "incorrect audio shape"
            x = (x + self.positional_embedding).to(x.dtype)
        else:
            assert w is not None, "You have to provide either x or w"
            x = self.we(w).permute(0, 2, 1)
            assert x.shape[1:] == self.positional_embedding.shape, "incorrect audio shape"
            x = (x + self.positional_embedding).to(x.dtype)
        if self.use_betweenness:
            be = self.betweenness.compute_betweenness(x)
            x = x + be.unsqueeze(-1)
        for block in chain(self.blockA or []):
            x = block(x)
        self._counter += 1
        return self.ln_enc(x)

class TextDecoder(nn.Module):
    def __init__(self, vocab: int, ctx: int, dims: int, head: int, layer):
        super().__init__()
        head_dim = dims // head
        self.ctx = ctx
        self.dropout = 0.1
        self.token_embedding = nn.Embedding(num_embeddings=vocab, embedding_dim=dims)
        self.positional_embedding = nn.Parameter(data=torch.empty(ctx, dims))
        self.ln_dec = RMSNorm(normalized_shape=dims)
        self.rotary = Rotary(dim=head_dim, learned_freq=True)
        self.blockA = (nn.ModuleList([Residual(dims=dims, head=head, cross_attention=False)
                                      for _ in range(layer)]) if layer > 0 else None)
        mask = torch.empty(ctx, ctx).fill_(-np.inf).triu_(1)
        self.register_buffer("mask", mask, persistent=False)

    def forward(self, x, xa, kv_cache=None) -> Tensor:
        offset = next(iter(kv_cache.values())).shape[1] if kv_cache else 0
        x = (self.token_embedding(x) + self.positional_embedding[offset: offset + x.shape[-1]])
        x = nn.functional.dropout(x, p=self.dropout, training=self.training)
        ctx = x.shape[1]
        freqs_cis = self.rotary(ctx)
        x = self.rotary.apply_rotary(x, freqs_cis)
        x = x.to(xa.dtype)
        for block in chain(self.blockA or []):
            x = block(x, xa=xa, mask=self.mask, kv_cache=kv_cache)
        x = self.ln_dec(x)
        # Tied output projection: logits come from the token embedding matrix.
        logits = (x @ torch.transpose(self.token_embedding.weight.to(x.dtype), 0, 1)).float()
        return logits

class Echo(nn.Module):
    def __init__(self, param: Dimensions):
        super().__init__()
        self.param = param
        self.encoder = AudioEncoder(
            mels=param.mels,
            ctx=param.audio_ctx,
            dims=param.audio_dims,
            head=param.audio_head,
            layer=param.encoder_idx,
            act=param.act,
        )
        self.decoder = TextDecoder(
            vocab=param.vocab,
            ctx=param.text_ctx,
            dims=param.text_dims,
            head=param.text_head,
            layer=param.decoder_idx,
        )
        all_head = torch.zeros(self.param.decoder_idx, self.param.text_head, dtype=torch.bool)
        all_head[self.param.decoder_idx // 2:] = True
        self.register_buffer("alignment_head", all_head.to_sparse(), persistent=False)

    def set_alignment_head(self, dump: bytes):
        array = np.frombuffer(
            gzip.decompress(base64.b85decode(dump)), dtype=bool).copy()
        mask = torch.from_numpy(array).reshape(
            self.param.decoder_idx, self.param.text_head)
        self.register_buffer("alignment_head", mask.to_sparse(), persistent=False)

    def embed_audio(self, input_features: torch.Tensor):
        return self.encoder(input_features)

    def logits(self, input_ids: torch.Tensor, audio_features: torch.Tensor):
        return self.decoder(input_ids, audio_features)

    @torch.autocast(device_type="cuda")
    def forward(self,
                input_features: torch.Tensor = None,
                waveform: Optional[torch.Tensor] = None,
                input_ids=None,
                labels=None,
                decoder_inputs_embeds=None,
                ) -> Dict[str, torch.Tensor]:
        if input_ids is None and decoder_inputs_embeds is None:
            if labels is not None:
                input_ids = shift_tokens_right(
                    labels, self.param.pad_token_id, self.param.decoder_start_token_id)
            else:
                raise ValueError("You have to provide either decoder_input_ids or labels")
        if input_features is not None:
            if waveform is not None:
                encoded_audio = self.encoder(x=input_features, w=waveform)
            else:
                encoded_audio = self.encoder(x=input_features, w=None)
        elif waveform is not None:
            encoded_audio = self.encoder(x=None, w=waveform)
        else:
            raise ValueError("You have to provide either input_features or waveform")
        logits = self.decoder(input_ids, encoded_audio)
        loss = None
        if labels is not None:
            loss = F.cross_entropy(
                logits.view(-1, logits.shape[-1]), labels.view(-1), ignore_index=-100)
        return {"logits": logits, "loss": loss, "labels": labels,
                "input_ids": input_ids, "audio_features": encoded_audio}

    @property
    def device(self):
        return next(self.parameters()).device

    def install_kv_cache_hooks(self, cache: Optional[dict] = None):
        cache = {**cache} if cache is not None else {}
        hooks = []

        def save_to_cache(module, _, output):
            if module not in cache or output.shape[1] > self.param.text_ctx:
                cache[module] = output
            else:
                cache[module] = torch.cat([cache[module], output], dim=1).detach()
            return cache[module]

        # Alternative hook for attention variants that return
        # (output, {"k_cache": ..., "v_cache": ...}); not wired up in install_hooks below.
        def save_adaptive_output(module, _, output):
            if isinstance(output, tuple) and len(output) == 2:
                tensor_output, cache_updates = output
                module_k = f"{module}_k"
                module_v = f"{module}_v"
                if module_k not in cache or tensor_output.shape[1] > self.param.text_ctx:
                    cache[module_k] = cache_updates["k_cache"]
                    cache[module_v] = cache_updates["v_cache"]
                else:
                    cache[module_k] = torch.cat([cache[module_k], cache_updates["k_cache"]], dim=1).detach()
                    cache[module_v] = torch.cat([cache[module_v], cache_updates["v_cache"]], dim=1).detach()
                return tensor_output
            return output

        def install_hooks(layer: nn.Module):
            if isinstance(layer, Multihead):
                hooks.append(layer.k.register_forward_hook(save_to_cache))
                hooks.append(layer.v.register_forward_hook(save_to_cache))

        self.encoder.apply(install_hooks)
        self.decoder.apply(install_hooks)
        return cache, hooks

    def _init_weights(self, module):
        std = 0.02
        self.init_counts = {"Linear": 0, "Conv1d": 0, "LayerNorm": 0, "RMSNorm": 0,
                            "Conv2d": 0, "SEBlock": 0, "TextDecoder": 0, "AudioEncoder": 0,
                            "Residual": 0, "Multihead": 0, "MultiheadA": 0, "MultiheadB": 0,
                            "MultiheadC": 0}
        for name, module in self.named_modules():
            if isinstance(module, Linear):
                nn.init.xavier_uniform_(module.weight)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
                self.init_counts["Linear"] += 1
            elif isinstance(module, Conv1d):
                nn.init.normal_(module.weight, mean=0.0, std=std)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
                self.init_counts["Conv1d"] += 1
            elif isinstance(module, LayerNorm):
                nn.init.ones_(module.weight)
                nn.init.zeros_(module.bias)
                self.init_counts["LayerNorm"] += 1
            elif isinstance(module, RMSNorm):
                nn.init.ones_(module.weight)
                self.init_counts["RMSNorm"] += 1
            elif isinstance(module, Multihead):
                nn.init.xavier_uniform_(module.q.weight)
                nn.init.zeros_(module.q.bias)
                nn.init.xavier_uniform_(module.k.weight)
                nn.init.xavier_uniform_(module.v.weight)
                nn.init.xavier_uniform_(module.o.weight)
                if module.o.bias is not None:
                    nn.init.zeros_(module.o.bias)
                self.init_counts["Multihead"] += 1
            elif isinstance(module, Conv2d):
                nn.init.normal_(module.weight, mean=0.0, std=std)
                if module.bias is not None:
                    nn.init.zeros_(module.bias)
                self.init_counts["Conv2d"] += 1
            elif isinstance(module, SEBlock):
                nn.init.ones_(module.fc[0].weight)
                nn.init.zeros_(module.fc[0].bias)
                nn.init.ones_(module.fc[2].weight)
                nn.init.zeros_(module.fc[2].bias)
                self.init_counts["SEBlock"] += 1
            elif isinstance(module, TextDecoder):
                self.init_counts["TextDecoder"] += 1
            elif isinstance(module, AudioEncoder):
                nn.init.xavier_uniform_(module.se[0].weight)
                nn.init.zeros_(module.se[0].bias)
                nn.init.xavier_uniform_(module.se[2].weight)
                nn.init.zeros_(module.se[2].bias)
                nn.init.xavier_uniform_(module.se[4].weight)
                nn.init.zeros_(module.se[4].bias)
                self.init_counts["AudioEncoder"] += 1
            elif isinstance(module, Residual):
                nn.init.xavier_uniform_(module.attna.q.weight)
                nn.init.zeros_(module.attna.q.bias)
                nn.init.xavier_uniform_(module.attna.k.weight)
                nn.init.xavier_uniform_(module.attna.v.weight)
                nn.init.xavier_uniform_(module.attna.o.weight)
                if module.attna.o.bias is not None:
                    nn.init.zeros_(module.attna.o.bias)
                self.init_counts["Residual"] += 1

    def init_weights(self):
        print("Initializing all weights")
        self.apply(self._init_weights)
        print("Initialization summary:")
        for module_type, count in self.init_counts.items():
            print(f"{module_type}: {count}")

metric = evaluate.load(path="wer")

@dataclass
class DataCollator:
    extractor: Any
    tokenizer: Any
    decoder_start_token_id: Any

    def __call__(self, features: List[Dict[str, Union[List[int], Tensor]]]) -> Dict[str, Tensor]:
        batch = {}
        if "input_features" in features[0]:
            input_features = [{"input_features": f["input_features"]} for f in features]
            batch["input_features"] = self.extractor.pad(input_features, return_tensors="pt")["input_features"]
        if "waveform" in features[0]:
            waveforms = [f["waveform"] for f in features]
            # 1500 frames * 160-sample hop = 240,000 samples (~15 s at 16 kHz)
            fixed_len = 1500 * 160
            padded_waveforms = []
            for w in waveforms:
                if w.shape[-1] < fixed_len:
                    w = F.pad(w, (0, fixed_len - w.shape[-1]))
                else:
                    w = w[..., :fixed_len]
                padded_waveforms.append(w)
            batch["waveform"] = torch.stack(padded_waveforms)
        label_features = [{"input_ids": f["labels"]} for f in features]
        labels_batch = self.tokenizer.pad(label_features, return_tensors="pt")
        labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
        if (labels[:, 0] == self.decoder_start_token_id).all().cpu().item():
            labels = labels[:, 1:]
        batch["labels"] = labels
        return batch

def prepare_dataset(batch, input_features=True, waveform=True):
    audio = batch["audio"]
    fixed_len = 1500 * 160
    wav = torch.tensor(audio["array"]).float()
    if wav.shape[-1] < fixed_len:
        wav = F.pad(wav, (0, fixed_len - wav.shape[-1]))
    else:
        wav = wav[..., :fixed_len]
    if waveform:
        batch["waveform"] = wav.unsqueeze(0)
    if input_features:
        batch["input_features"] = extractor(wav.numpy(), sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = tokenizer(batch["transcription"]).input_ids
    return batch

def compute_metrics(eval_pred):
    pred_logits = eval_pred.predictions
    label_ids = eval_pred.label_ids
    if isinstance(pred_logits, tuple):
        pred_ids = pred_logits[0]
    else:
        pred_ids = pred_logits
    if pred_ids.ndim == 3:
        pred_ids = np.argmax(pred_ids, axis=-1)
    label_ids[label_ids == -100] = tokenizer.pad_token_id
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
    if len(pred_ids) > 0:
        print("\nSample Predictions:")
        for idx in range(min(1, len(pred_ids))):
            print(f" Example {idx+1}:")
            print(f"• Reference: {label_str[idx]}")
            print(f"• Prediction: {pred_str[idx]}")
        print("="*80 + "\n")
    wer = 100 * metric.compute(predictions=pred_str, references=label_str)
    pred_flat = pred_ids.flatten()
    labels_flat = label_ids.flatten()
    mask = labels_flat != tokenizer.pad_token_id
    acc = accuracy_score(y_true=labels_flat[mask], y_pred=pred_flat[mask])
    pre = precision_score(y_true=labels_flat[mask], y_pred=pred_flat[mask],
                          average='weighted', zero_division=0)
    rec = recall_score(y_true=labels_flat[mask], y_pred=pred_flat[mask],
                       average='weighted', zero_division=0)
    f1 = f1_score(y_true=labels_flat[mask], y_pred=pred_flat[mask],
                  average='weighted', zero_division=0)
    return {
        "wer": wer,
        "accuracy": acc,
        "precision": pre,
        "recall": rec,
        "f1": f1,
    }

class MaxFactor(torch.optim.Optimizer):
    __version__ = "1.0"

    def __init__(self, params, lr=0.025, beta2_decay=-0.8, eps=(1e-10, 1e-4), d=1.0,
                 weight_decay=0.025, gamma=0.99, max=False, min_lr=1e-7):
        print(f"Using MaxFactor optimizer v{self.__version__}")
        defaults = dict(lr=lr, beta2_decay=beta2_decay, eps=eps, d=d,
                        weight_decay=weight_decay, gamma=gamma, max=max, min_lr=min_lr)
        super().__init__(params=params, defaults=defaults)

    def get_lr(self):
        """Return last-used learning rates for all parameter groups."""
        param_specific_lrs = []
        for group in self.param_groups:
            group_lrs = []
            for p in group["params"]:
                state = self.state[p]
                if "last_alpha" in state:
                    group_lrs.append(state["last_alpha"])
            if group_lrs:
                param_specific_lrs.append(sum(group_lrs) / len(group_lrs))
            else:
                param_specific_lrs.append(group["lr"])
        return param_specific_lrs

    def get_last_lr(self):
        return self.get_lr()

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            params_with_grad, grads, row_vars, col_vars, v, state_steps = [], [], [], [], [], []
            eps1, eps2 = group["eps"]
            min_lr = group.get("min_lr", 1e-7)
            for p in group["params"]:
                if p.grad is None:
                    continue
                grad = p.grad
                if grad.dtype in {torch.float16, torch.bfloat16}:
                    grad = grad.float()
                state = self.state[p]
                if len(state) == 0:
                    state["step"] = torch.tensor(0.0, dtype=torch.float32)
                    if p.dim() > 1:
                        # Factored second-moment statistics for matrices.
                        row_shape, col_shape = list(p.shape), list(p.shape)
                        row_shape[-1], col_shape[-2] = 1, 1
                        state["row_var"] = p.new_zeros(row_shape)
                        state["col_var"] = p.new_zeros(col_shape)
                    state["v"] = torch.zeros_like(p, memory_format=torch.preserve_format)
                row_vars.append(state.get("row_var", None))
                col_vars.append(state.get("col_var", None))
                v.append(state["v"])
                state_steps.append(state["step"])
                params_with_grad.append(p)
                grads.append(grad)

            for i, param in enumerate(params_with_grad):
                grad = grads[i]
                state = self.state[param]
                if group["max"]:
                    grad = -grad
                step_t = state_steps[i]
                row_var, col_var, vi = row_vars[i], col_vars[i], v[i]
                if eps1 is None:
                    eps1 = torch.finfo(param.dtype).eps
                step_t += 1
                step_float = step_t.item()
                one_minus_beta2_t = min(0.999, max(0.001, step_float ** group["beta2_decay"]))
                rho_t = max(min_lr, min(group["lr"], 1.0 / (step_float ** 0.5)))
                alpha = max(eps2, (param.norm() / (param.numel() ** 0.5 + 1e-12)).item()) * rho_t
                state["last_alpha"] = alpha
                if group["weight_decay"] > 0:
                    param.mul_(1 - group["lr"] * group["weight_decay"])
                if grad.dim() > 1:
                    row_mean = torch.norm(grad, dim=-1, keepdim=True).square_()
                    row_mean.div_(grad.size(-1) + eps1)
                    row_var.lerp_(row_mean, one_minus_beta2_t)
                    col_mean = torch.norm(grad, dim=-2, keepdim=True).square_()
                    col_mean.div_(grad.size(-2) + eps1)
                    col_var.lerp_(col_mean, one_minus_beta2_t)
                    var_estimate = row_var @ col_var
                    max_row_var = row_var.max(dim=-2, keepdim=True)[0]
                    var_estimate.div_(max_row_var.clamp_(min=eps1))
                else:
                    # Vectors fall back to a plain EMA of squared gradients.
                    vi.mul_(group["gamma"]).add_(grad.square_(), alpha=1 - group["gamma"])
                    var_estimate = vi
                update = var_estimate.clamp_(min=eps1 * eps1).rsqrt_().mul_(grad)
                inf_norm = torch.norm(update, float('inf'))
                if inf_norm > 0:
                    update.div_(inf_norm.clamp_(min=eps1))
                denom = max(1.0, update.norm(2).item() / ((update.numel() ** 0.5) * group["d"]))
                if param.dim() > 1:
                    max_vals = update.abs().max(dim=-1, keepdim=True)[0]
                    param.add_(-alpha / denom * update.sign() * max_vals)
                else:
                    param.add_(-alpha / denom * update)
                state["step"] = step_t
        return loss

if __name__ == "__main__":

    param = Dimensions(
        mels=128,
        audio_ctx=1500,
        audio_head=4,
        encoder_idx=4,
        audio_dims=512,
        vocab=51865,
        text_ctx=512,
        text_head=4,
        decoder_idx=4,
        text_dims=512,
        decoder_start_token_id=50258,
        pad_token_id=50257,
        eos_token_id=50257,
        act="gelu",
    )

    model = Echo(param).to('cuda')

    token = ""
    extractor = WhisperFeatureExtractor.from_pretrained(
        "openai/whisper-small", token=token, feature_size=128, sampling_rate=16000,
        do_normalize=True, return_tensors="pt", chunk_length=15)
    tokenizer = WhisperTokenizerFast.from_pretrained(
        "openai/whisper-small", language="en", task="transcribe", token=token)
    data_collator = DataCollator(extractor=extractor,
                                 tokenizer=tokenizer, decoder_start_token_id=50258)

    log_dir = os.path.join('./output/logs', datetime.now().strftime(format='%m-%d_%H'))
    os.makedirs(name=log_dir, exist_ok=True)

    dataset = DatasetDict()
    dataset = load_dataset("google/fleurs", "en_us", token=token,
                           trust_remote_code=True, streaming=False)
    dataset = dataset.cast_column(column="audio", feature=Audio(sampling_rate=16000))
    dataset = dataset.map(function=prepare_dataset,
                          remove_columns=list(next(iter(dataset.values())).features)).with_format(type="torch")

    training_args = Seq2SeqTrainingArguments(
        output_dir=log_dir,
        per_device_train_batch_size=1,
        per_device_eval_batch_size=1,
        gradient_accumulation_steps=1,
        eval_accumulation_steps=1,
        tf32=True,
        bf16=True,
        eval_strategy="steps",
        save_strategy="steps",
        max_steps=10000,
        save_steps=10000,
        eval_steps=1000,
        warmup_steps=1000,
        num_train_epochs=1,
        logging_steps=100,
        logging_dir=log_dir,
        report_to=["tensorboard"],
        push_to_hub=False,
        disable_tqdm=False,
        save_total_limit=1,
        label_names=["labels"],
        eval_on_start=False,
        # optim="adafactor",
        save_safetensors=True,
    )

    optimizer = MaxFactor(model.parameters(), lr=0.025,
                          beta2_decay=-0.8,
                          eps=(1e-10, 0.0001),
                          d=1,
                          weight_decay=0.025,
                          gamma=0.99,
                          max=False,
                          min_lr=1e-7)

    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000, eta_min=1e-6)

    trainer = Seq2SeqTrainer(
        args=training_args,
        model=model,
        train_dataset=dataset["train"].shuffle(seed=42).take(1000),
        eval_dataset=dataset["test"].take(100),
        data_collator=data_collator,
        compute_metrics=compute_metrics,
        processing_class=extractor,
        optimizers=(optimizer, scheduler),
    )

    model.init_weights()
    print("Trainable parameters:", sum(p.numel() for p in model.parameters() if p.requires_grad))
    print("Total parameters:", sum(p.numel() for p in model.parameters()))
    trainer.train(resume_from_checkpoint=False)

## pytorch loop
# def train(
#     model,
#     dataset,
#     data_collator,
#     tokenizer,
#     optimizer=None,
#     scheduler=None,
#     train_set=None,
#     eval_set=None,
#     epochs=3,
#     batch_size=1,
#     lr=2e-4,
#     device="cuda",
#     grad_accum_steps=1,
#     max_grad_norm=1.0,
#     log_dir="./output/logs",
#     save_best=True,
#     early_stopping_patience=None,
#     max_steps=10000,
#     eval_steps=1000,
# ):
#     from torch.utils.tensorboard import SummaryWriter
#     import os
#     writer = SummaryWriter(log_dir=log_dir)
#     model = model.to(device)
#     optimizer = optimizer
#     scheduler = scheduler
#     scaler = torch.amp.GradScaler('cuda')
#     train_set = dataset["train"]
#     eval_set = dataset["test"]
#     train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, collate_fn=data_collator)
#     eval_loader = DataLoader(eval_set, batch_size=batch_size, shuffle=False, collate_fn=data_collator)
#     best_wer = float("inf")
#     best_step = 0
#     patience_counter = 0
#     global_step = 0
#     running_loss = 0
#     train_iter = iter(train_loader)
#     pbar = tqdm(total=max_steps, desc="Training", dynamic_ncols=True)
#     model.train()
#     optimizer.zero_grad()
#     while global_step < max_steps:
#         try:
#             batch = next(train_iter)
#         except StopIteration:
#             train_iter = iter(train_loader)
#             batch = next(train_iter)
#         for k in batch:
#             if isinstance(batch[k], torch.Tensor):
#                 batch[k] = batch[k].to(device)
#         with torch.cuda.amp.autocast():
#             outputs = model(
#                 input_features=batch.get("input_features", None),
#                 waveform=batch.get("waveform", None),
#                 input_ids=None,
#                 labels=batch["labels"]
#             )
#             loss = outputs["loss"] / grad_accum_steps
#         scaler.scale(loss).backward()
#         running_loss += loss.item() * grad_accum_steps
#         if (global_step + 1) % grad_accum_steps == 0:
#             scaler.unscale_(optimizer)
#             torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
#             scaler.step(optimizer)
#             scaler.update()
#             optimizer.zero_grad()
#             if scheduler is not None:
#                 scheduler.step()
#         writer.add_scalar("train/loss", loss.item() * grad_accum_steps, global_step)
#         writer.add_scalar("train/lr", optimizer.param_groups[0]["lr"], global_step)
#         pbar.set_postfix({
#             "loss": f"{loss.item() * grad_accum_steps:.4f}",
#             "lr": optimizer.param_groups[0]["lr"]
#         })
#         pbar.update(1)
#         global_step += 1
#         if global_step % eval_steps == 0 or global_step == max_steps:
#             model.eval()
#             all_preds, all_labels = [], []
#             eval_loss = 0
#             with torch.no_grad():
#                 for batch_eval in tqdm(eval_loader, desc=f"Eval@step{global_step}", leave=False):
#                     for k in batch_eval:
#                         if isinstance(batch_eval[k], torch.Tensor):
#                             batch_eval[k] = batch_eval[k].to(device)
#                     outputs = model(
#                         input_features=batch_eval.get("input_features", None),
#                         waveform=batch_eval.get("waveform", None),
#                         input_ids=None,
#                         labels=batch_eval["labels"]
#                     )
#                     logits = outputs["logits"]
#                     labels = batch_eval["labels"]
#                     loss = outputs["loss"]
#                     eval_loss += loss.item()
#                     preds = torch.argmax(logits, dim=-1)
#                     labels_for_decode = labels.clone()
#                     labels_for_decode[labels_for_decode == -100] = tokenizer.pad_token_id
#                     all_preds.extend(preds.cpu().numpy())
#                     all_labels.extend(labels_for_decode.cpu().numpy())
#             avg_eval_loss = eval_loss / len(eval_loader)
#             pred_str = tokenizer.batch_decode(all_preds, skip_special_tokens=True)
#             label_str = tokenizer.batch_decode(all_labels, skip_special_tokens=True)
#             if len(all_preds) > 0:
#                 print("\nSample Predictions:")
#                 for idx in range(min(1, len(all_preds))):
#                     print(f" Example {idx+1}:")
#                     print(f"• Reference: {label_str[idx]}")
#                     print(f"• Prediction: {pred_str[idx]}")
#                 print("="*80 + "\n")
#             wer = 100 * metric.compute(predictions=pred_str, references=label_str)
#             writer.add_scalar("eval/loss", avg_eval_loss, global_step)
#             writer.add_scalar("eval/wer", wer, global_step)
#             # scheduler.step(avg_eval_loss)
#             scheduler.step()
#             lr = scheduler.get_last_lr()[0]
#             pbar.set_postfix({
#                 "loss": f"{loss.item() * grad_accum_steps:.4f}",
#                 "lr": lr,
#                 "eval_wer": f"{wer:.2f}"
#             })
#             print(f"\nStep {global_step}: eval loss {avg_eval_loss:.4f}, WER {wer:.2f}")
#             # Save best model
#             if save_best and wer < best_wer:
#                 best_wer = wer
#                 best_step = global_step
#                 torch.save(model.state_dict(), os.path.join(log_dir, "best_model.pt"))
#                 print(f"Best model saved at step {global_step} with WER {wer:.2f}")
#             # Early stopping
#             if early_stopping_patience is not None:
#                 if wer < best_wer:
#                     patience_counter = 0
#                 else:
#                     patience_counter += 1
#                 if patience_counter >= early_stopping_patience:
#                     print(f"Early stopping at step {global_step}")
#                     break
#             model.train()
#         lr = scheduler.get_last_lr()[0]
#         writer.add_scalar("train/lr", lr, global_step)
#         pbar.set_postfix({
#             "loss": f"{loss.item() * grad_accum_steps:.4f}",
#             "lr": lr,
#             "eval_wer": f"{wer:.2f}"
#         })
#     print(f"Training complete. Best WER: {best_wer:.2f} at step {best_step}")
#     writer.close()

# if __name__ == "__main__":
#     param = Dimensions(
#         mels=128,
#         audio_ctx=1500,
#         audio_head=4,
#         encoder_idx=4,
#         audio_dims=512,
#         vocab=51865,
#         text_ctx=512,
#         text_head=4,
#         decoder_idx=4,
#         text_dims=512,
#         decoder_start_token_id=50258,
#         pad_token_id=50257,
#         eos_token_id=50257,
#         act="gelu",
#     )
#     model = Echo(param).to('cuda')
#     token = ""
#     extractor = WhisperFeatureExtractor.from_pretrained(
#         "openai/whisper-small", token=token, feature_size=128, sampling_rate=16000,
#         do_normalize=True, return_tensors="pt", chunk_length=15)
#     tokenizer = WhisperTokenizerFast.from_pretrained(
#         "openai/whisper-small", language="en", task="transcribe", token=token)
#     data_collator = DataCollator(extractor=extractor,
#                                  tokenizer=tokenizer, decoder_start_token_id=50258)
#     log_dir = os.path.join('./output/logs', datetime.now().strftime(format='%m-%d_%H'))
#     os.makedirs(name=log_dir, exist_ok=True)
#     dataset = DatasetDict()
#     dataset = load_dataset("google/fleurs", "en_us", token=token, trust_remote_code=True, streaming=False)
#     dataset = dataset.cast_column(column="audio", feature=Audio(sampling_rate=16000))
#     dataset = dataset.map(function=prepare_dataset,
#                           remove_columns=list(next(iter(dataset.values())).features)).with_format(type="torch")
#     optimizer = MaxFactor(model.parameters(), lr=0.025,
#                           beta2_decay=-0.8,
#                           eps=(1e-10, 0.0001),
#                           d=1,
#                           weight_decay=0.025,
#                           gamma=0.99,
#                           max=False,
#                           min_lr=1e-7)
#     scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000, eta_min=1e-6)
#     train_set = dataset["train"]
#     eval_set = dataset["test"]
#     train(model=model, dataset=dataset, data_collator=data_collator, tokenizer=tokenizer,
#           batch_size=1,
#           lr=2e-4,
#           device="cuda",
#           grad_accum_steps=1,
#           max_grad_norm=1.0,
#           log_dir="./output/logs",
#           save_best=True,
#           early_stopping_patience=None,
#           max_steps=10000,
#           eval_steps=1000,
#           optimizer=optimizer,
#           scheduler=scheduler,
#           train_set=train_set,
#           eval_set=eval_set,
#           )

# tensorboard --logdir ./output/logs
```
qwertyuiopasdfg/glm4-32B-4bit
qwertyuiopasdfg
2025-04-25T06:09:29Z
0
0
null
[ "safetensors", "glm4", "zh", "en", "base_model:THUDM/GLM-4-32B-0414", "base_model:quantized:THUDM/GLM-4-32B-0414", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
2025-04-25T05:51:42Z
---
license: mit
language:
- zh
- en
base_model:
- THUDM/GLM-4-32B-0414
---

bnb 4-bit quantized version of ***[THUDM/GLM-4-32B-0414](https://huggingface.co/THUDM/GLM-4-32B-0414)***
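A minimal loading sketch (an assumption, not verified against this checkpoint: the bitsandbytes 4-bit quantization config is saved with the weights, so `transformers`, `accelerate`, and `bitsandbytes` should suffice):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "qwertyuiopasdfg/glm4-32B-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
# If the quantization config ships with the checkpoint (assumed here),
# no extra BitsAndBytesConfig is needed; device_map="auto" places the
# 4-bit weights across available GPUs.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Tell me about the GLM-4 model family.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```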
firoz123/codegemma-2b-Q4_K_M-GGUF
firoz123
2025-04-25T06:07:26Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:google/codegemma-2b", "base_model:quantized:google/codegemma-2b", "license:gemma", "endpoints_compatible", "region:us" ]
null
2025-04-25T06:07:14Z
---
base_model: google/codegemma-2b
library_name: transformers
license: gemma
license_link: https://ai.google.dev/gemma/terms
tags:
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access CodeGemma on Hugging Face
extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
  and agree to Google’s usage license. To do this, please ensure you’re logged in
  to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# firoz123/codegemma-2b-Q4_K_M-GGUF
This model was converted to GGUF format from [`google/codegemma-2b`](https://huggingface.co/google/codegemma-2b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/codegemma-2b) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo firoz123/codegemma-2b-Q4_K_M-GGUF --hf-file codegemma-2b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo firoz123/codegemma-2b-Q4_K_M-GGUF --hf-file codegemma-2b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo firoz123/codegemma-2b-Q4_K_M-GGUF --hf-file codegemma-2b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo firoz123/codegemma-2b-Q4_K_M-GGUF --hf-file codegemma-2b-q4_k_m.gguf -c 2048
```
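The same GGUF file can also be driven from Python through the `llama-cpp-python` bindings; a minimal sketch (assumes `pip install llama-cpp-python huggingface-hub` and a version of the bindings that provides `Llama.from_pretrained`):

```python
from llama_cpp import Llama

# Downloads the GGUF file from this repo on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="firoz123/codegemma-2b-Q4_K_M-GGUF",
    filename="codegemma-2b-q4_k_m.gguf",
    n_ctx=2048,  # same context size as the llama-server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```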