| Column | Type | Range / Values |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | sequencelengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Teera/sentence-transformers-mini-thai-v3
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:19:22+00:00
text-generation
transformers
{}
PageTurnIO/long-mamba-squad-v2-copy-task
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:19:30+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HPY_gpt2_vB.2 This model is a fine-tuned version of [ClassCat/gpt2-base-french](https://huggingface.co/ClassCat/gpt2-base-french) on an unspecified dataset. It achieves the following results on the evaluation set: - Loss: 1.5851 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 139 | 1.6318 | | No log | 1.99 | 279 | 1.6035 | | No log | 3.0 | 419 | 1.5899 | | 1.5825 | 3.97 | 556 | 1.5851 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
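The card omits the usual usage snippet; a minimal sketch with the 🤗 `transformers` pipeline, assuming the checkpoint loads as a standard GPT-2 causal LM (the base model is French, hence the French prompt):

```python
from transformers import pipeline

# Usage sketch: the repo id comes from this row's `id` field, and we assume
# the fine-tuned checkpoint behaves as a standard GPT-2 text-generation model.
generator = pipeline("text-generation", model="azizkt/HPY_gpt2_vB.2")

# The base model is French, so a French prompt is the natural smoke test.
print(generator("Il était une fois", max_new_tokens=40)[0]["generated_text"])
```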
{"license": "cc-by-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "HPY_gpt2_vB.2", "results": []}]}
azizkt/HPY_gpt2_vB.2
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:20:29+00:00
null
null
{}
corenet-community/masrcnn-vit-huge
null
[ "region:us" ]
null
2024-04-29T14:20:38+00:00
token-classification
flair
## HunFlair2 model for PROMOTER [HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR2.md) (biomedical flair) for the promoter entity: - pre-trained language model: michiyasunaga/BioLinkBERT-base - fine-tuned on the RegEl corpus for the `Promoter` entity type Predicts 1 tag: | **tag** | **meaning** | | -------- | ------------------- | | Promoter | DNA promoter region | ______________________________________________________________________ ## Info ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`) - scispacy and its `en_core_sci_sm` model, since the demo below tokenizes with `SciSpacyTokenizer` ```python from flair.data import Sentence from flair.nn import Classifier from flair.tokenization import SciSpacyTokenizer text = "The upstream region of the glnA gene contained two putative extended promoter consensus sequences (p1 and p2)." sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer()) tagger = Classifier.load("regel-corpus/hunflair2-regel-promoter") tagger.predict(sentence) print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
{"language": "en", "tags": ["flair", "hunflair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "Two putative extended promoters consensus sequences (p1 and p2)."}]}
regel-corpus/hunflair2-regel-promoter
null
[ "flair", "pytorch", "hunflair", "token-classification", "sequence-tagger-model", "en", "region:us" ]
null
2024-04-29T14:21:05+00:00
null
transformers
# Uploaded model - **Developed by:** robgonsalves - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
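The card gives no inference example. Since the repo is tagged `gguf`, one plausible way to run it is with `llama-cpp-python`; the quantization filename pattern below is an assumption — check the repo's file list for the actual name:

```python
from llama_cpp import Llama

# Assumption: the repo contains a 4-bit GGUF file matching this pattern;
# adjust `filename` to whatever actually appears in the repo's file list.
llm = Llama.from_pretrained(
    repo_id="robgonsalves/fan-fabler-gguf",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

# Chat-style generation; the chat format is read from the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-line story hook."}]
)
print(out["choices"][0]["message"]["content"])
```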
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
robgonsalves/fan-fabler-gguf
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:21:25+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** Where the compression method requires calibration data, we used WikiText. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo beomi/Llama-3-Open-Ko-8B are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/beomi-Llama-3-Open-Ko-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("beomi/Llama-3-Open-Ko-8B") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, beomi/Llama-3-Open-Ko-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "beomi/Llama-3-Open-Ko-8B"}
PrunaAI/beomi-Llama-3-Open-Ko-8B-AWQ-4bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:beomi/Llama-3-Open-Ko-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-29T14:21:44+00:00
null
null
{}
nndang/checkpoint_wav2vec_synthetic_journal_40
null
[ "region:us" ]
null
2024-04-29T14:21:54+00:00
null
null
The trained models for https://github.com/pastelite/game_detection_ai
{"license": "creativeml-openrail-m"}
pastelite/game-classification
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-29T14:22:22+00:00
null
null
{}
Sigmaasik/Char.genshin
null
[ "region:us" ]
null
2024-04-29T14:23:29+00:00
null
null
{"license": "openrail"}
otmanabs/gcam
null
[ "safetensors", "license:openrail", "region:us" ]
null
2024-04-29T14:23:32+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Instruct-spider This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
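The card stops at training details. For reference, a minimal loading sketch under the assumption that this repo holds a PEFT (LoRA-style) adapter for the listed base model — which the `peft` library tag and `base_model` metadata suggest — might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.2"

# Load the base model, then attach the fine-tuned adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "VictorDCh/Mistral-7B-Instruct-spider")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# "spider" presumably refers to the text-to-SQL benchmark, so an SQL-style
# request is a reasonable smoke test (this framing is an assumption).
inputs = tokenizer("[INST] List all customers from France. [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```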
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral-7B-Instruct-spider", "results": []}]}
VictorDCh/Mistral-7B-Instruct-spider
null
[ "peft", "tensorboard", "safetensors", "mistral", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-29T14:23:38+00:00
token-classification
flair
## HunFlair2 model for TFBS [HunFlair](https://github.com/flairNLP/flair/blob/master/resources/docs/HUNFLAIR2.md) (biomedical flair) for the transcription factor binding site (TFBS) entity: - pre-trained language model: michiyasunaga/BioLinkBERT-base - fine-tuned on the RegEl corpus for the `Tfbs` entity type Predicts 1 tag: | **tag** | **meaning** | | ------- | ---------------------------------------- | | Tfbs | DNA region bound by transcription factor | ______________________________________________________________________ ## Info ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`) - scispacy and its `en_core_sci_sm` model, since the demo below tokenizes with `SciSpacyTokenizer` ```python from flair.data import Sentence from flair.nn import Classifier from flair.tokenization import SciSpacyTokenizer text = "We found that Egr-1 specifically binds to the PTEN 5' untranslated region, which contains a functional GCGGCGGCG Egr-1-binding site." sentence = Sentence(text, use_tokenizer=SciSpacyTokenizer()) tagger = Classifier.load("regel-corpus/hunflair2-regel-tfbs") tagger.predict(sentence) print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
{"language": "en", "tags": ["flair", "hunflair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "It contains a functional GCGGCGGCG Egr-1-binding site"}]}
regel-corpus/hunflair2-regel-tfbs
null
[ "flair", "pytorch", "hunflair", "token-classification", "sequence-tagger-model", "en", "region:us" ]
null
2024-04-29T14:24:13+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/r818q55
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:24:28+00:00
null
null
{}
Edsworth/x-ray_hentai
null
[ "region:us" ]
null
2024-04-29T14:27:17+00:00
null
null
# ryota39-Phi-3-mini-4k-instruct-dpo-gguf This is a gguf-format conversion of [Phi-3-mini-4k-instruct-dpo, published by ryota39](https://huggingface.co/ryota39/Phi-3-mini-4k-instruct-dpo). The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'ryota39-Phi-3-mini-4k-instruct-dpo-Q4_0.gguf' -p "<|user|>\n今晩の夕食のレシピを教えて<|end>\n<|assistant|>\n" -n 128 ``` (The sample prompt asks, in Japanese, for a dinner recipe.)
{"language": ["en", "ja"], "license": "mit", "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"]}
mmnga/ryota39-Phi-3-mini-4k-instruct-dpo-gguf
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:mit", "region:us" ]
null
2024-04-29T14:27:25+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** Where the compression method requires calibration data, we used WikiText. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo meta-llama/Meta-Llama-3-8B-Instruct are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, meta-llama/Meta-Llama-3-8B-Instruct, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-HQQ-4bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:28:03+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** Where the compression method requires calibration data, we used WikiText. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo meta-llama/Meta-Llama-3-8B are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, meta-llama/Meta-Llama-3-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B"}
PrunaAI/meta-llama-Meta-Llama-3-8B-HQQ-4bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "base_model:meta-llama/Meta-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:28:06+00:00
null
null
{"license": "apache-2.0"}
Phin4real/newfocus
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-29T14:28:38+00:00
text-generation
transformers
{}
PageTurnIO/long-mamba-squad-v2-ref-task
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:29:50+00:00
null
transformers
# Uploaded model - **Developed by:** SubashNeupane - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
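As with the card above, no inference code is provided; a minimal `transformers` sketch, assuming the safetensors weights load as a standard Llama-3 instruct model whose tokenizer ships a chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SubashNeupane/llama-3-8b-Instruct-bnb-4bit-medicalQA2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# The repo name suggests a medical-QA fine-tune, so ask a medical question.
messages = [{"role": "user", "content": "What are common symptoms of anemia?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```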
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
SubashNeupane/llama-3-8b-Instruct-bnb-4bit-medicalQA2
null
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:30:00+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
rbgo/infer-Llama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-29T14:30:09+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** Where the compression method requires calibration data, we used WikiText. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo NousResearch/Meta-Llama-3-8B-Instruct are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-Instruct-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-Instruct-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Meta-Llama-3-8B-Instruct, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B-Instruct"}
PrunaAI/NousResearch-Meta-Llama-3-8B-Instruct-HQQ-4bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:30:29+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** Where the compression method requires calibration data, we used WikiText. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo NousResearch/Meta-Llama-3-8B are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Meta-Llama-3-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Meta-Llama-3-8B"}
PrunaAI/NousResearch-Meta-Llama-3-8B-HQQ-4bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "base_model:NousResearch/Meta-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:30:34+00:00
null
null
{}
corenet-community/coco-vit-base
null
[ "region:us" ]
null
2024-04-29T14:31:33+00:00
text-classification
transformers
## TextAttack Model Card This `distilbert` model was fine-tuned using TextAttack. The model was fine-tuned for 3 epochs with a batch size of 8, a maximum sequence length of 512, and an initial learning rate of 3e-05. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was an eval-set accuracy of 0.9543333333333334, reached after 3 epochs. For more information, check out [TextAttack on GitHub](https://github.com/QData/TextAttack).
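The card does not show how to query the model; a minimal sketch with the `transformers` pipeline (the Chinese example sentence is illustrative — the repo name suggests Ctrip hotel-review classification):

```python
from transformers import pipeline

# The model is tagged zh / text-classification, so we feed it a
# Chinese review-style sentence as a smoke test.
clf = pipeline("text-classification", model="WangA/distilbert-base-finetuned-ctrip")
print(clf("酒店位置很好,房间干净,服务也不错。"))  # e.g. [{'label': ..., 'score': ...}]
```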
{"language": ["zh"], "license": "apache-2.0", "metrics": ["accuracy"], "pipeline_tag": "text-classification"}
WangA/distilbert-base-finetuned-ctrip
null
[ "transformers", "safetensors", "distilbert", "text-classification", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:31:38+00:00
null
null
{}
corenet-community/coco-vit-large
null
[ "region:us" ]
null
2024-04-29T14:32:02+00:00
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut_synDB_plus This model is a fine-tuned version of [Donut01/donut_synDB_wplus](https://huggingface.co/Donut01/donut_synDB_wplus) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0612 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 5 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 6 - total_train_batch_size: 30 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1308 | 0.98 | 32 | 0.0953 | | 0.0601 | 1.99 | 65 | 0.0854 | | 0.0465 | 2.94 | 96 | 0.0612 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
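The card omits inference code; a minimal sketch for a Donut-style `VisionEncoderDecoderModel` checkpoint follows. The `document.png` input and the `<s>` task prompt are placeholders/assumptions — the real task prompt depends on how the model was trained:

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Donut01/donut_synDB_plus"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

# Placeholder input image; use one of your own documents.
image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumption: the checkpoint uses a task prompt like this; check the
# tokenizer's added special tokens for the actual one.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```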
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "Donut01/donut_synDB_wplus", "model-index": [{"name": "donut_synDB_plus", "results": []}]}
Donut01/donut_synDB_plus
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:Donut01/donut_synDB_wplus", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:32:59+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_2iters_bs256_nodpo_full6w_iter_2 This model is a fine-tuned version of [ShenaoZhang/0.001_2iters_bs256_nodpo_full6w_iter_1](https://huggingface.co/ShenaoZhang/0.001_2iters_bs256_nodpo_full6w_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
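The auto-generated card lacks a usage snippet; a minimal sketch, assuming the repo ships a chat template (it is tagged conversational) and standard tokenizer files:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ShenaoZhang/0.001_2iters_bs256_nodpo_full6w_iter_2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```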
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZhang/0.001_2iters_bs256_nodpo_full6w_iter_1", "model-index": [{"name": "0.001_2iters_bs256_nodpo_full6w_iter_2", "results": []}]}
ShenaoZhang/0.001_2iters_bs256_nodpo_full6w_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_2iters_bs256_nodpo_full6w_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:33:32+00:00
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pietroorlandi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pietroorlandi -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pietroorlandi ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "394.50 +/- 119.84", "name": "mean_reward", "verified": false}]}]}]}
pietroorlandi/dqn-spaceinvaders-rlzoo
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-29T14:33:47+00:00
text-generation
transformers
# Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') # Use the model's own device so this also works without a GPU output_ids = model.generate(input_ids.to(model.device)) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
usr-bin-ksh/gsn-sh-kolama-finetune2
null
[ "transformers", "safetensors", "autotrain", "text-generation-inference", "text-generation", "peft", "conversational", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:33:51+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
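The "How to Get Started with the Model" section above is empty; a generic, hedged sketch for this llama text-generation checkpoint (only the repo id is taken from this record; everything else is a standard-usage assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "shallow6414/dwr97r8"  # repo id from this record; assumes standard tokenizer/config files
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```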
{"library_name": "transformers", "tags": []}
shallow6414/dwr97r8
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:34:57+00:00
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
mendozalopez/adventuretime
null
[ "fastai", "region:us", "has_space" ]
null
2024-04-29T14:35:01+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # training_with_callbacks This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1529 - Precision: 0.4993 - Recall: 0.5397 - F1: 0.5187 - Accuracy: 0.9661 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 205 | 0.1641 | 0.3048 | 0.3556 | 0.3282 | 0.9556 | | No log | 2.0 | 410 | 0.1387 | 0.4741 | 0.4365 | 0.4545 | 0.9642 | | 0.1943 | 3.0 | 615 | 0.1430 | 0.4690 | 0.4810 | 0.4749 | 0.9648 | | 0.1943 | 4.0 | 820 | 0.1481 | 0.4993 | 0.5365 | 0.5172 | 0.9655 | | 0.0496 | 5.0 | 1025 | 0.1529 | 0.4993 | 0.5397 | 0.5187 | 0.9661 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.0+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
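A hedged usage sketch for the fine-tuned token classifier (entity label names depend on the repo's config, which the card does not document):

```python
from transformers import pipeline

# "simple" aggregation merges sub-word tokens into whole entity spans
ner = pipeline("token-classification", model="cria111/training_with_callbacks", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```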
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "training_with_callbacks", "results": []}]}
cria111/training_with_callbacks
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:35:09+00:00
sentence-similarity
sentence-transformers
# nhinbm/recruit_finetune This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('nhinbm/recruit_finetune') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('nhinbm/recruit_finetune') model = AutoModel.from_pretrained('nhinbm/recruit_finetune') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nhinbm/recruit_finetune) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 373 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 50, "evaluator": "sentence_transformers.evaluation.InformationRetrievalEvaluator.InformationRetrievalEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 74, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
nhinbm/recruit_finetune
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:36:30+00:00
text-to-image
diffusers
# SDXL LoRA DreamBooth - aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0 <Gallery /> ## Model description ### These are aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0.safetensors` here 💾](/aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0/blob/main//home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0.safetensors)**. - Place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0_emb.safetensors` here 💾](/aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0/blob/main//home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0_emb.safetensors)**. - Place it on it on your `embeddings` folder - Use it by adding `/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0_emb` to your prompt. For example, `A photo of /home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0', filename='/home/mila/f/feiziaar/scratch/dreambooth-outputs/jean-francois-godbout-batch4-repeats4-rank32-snr5.0_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('A photo of <s0><s1> giving a speech').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0/tree/main). 
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
{"license": "openrail++", "tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "diffusers", "lora", "template:sd-lora"], "widget": [{"text": "A photo of <s0><s1> giving a speech", "output": {"url": "image_0.png"}}, {"text": "A photo of <s0><s1> giving a speech", "output": {"url": "image_1.png"}}, {"text": "A photo of <s0><s1> giving a speech", "output": {"url": "image_2.png"}}, {"text": "A photo of <s0><s1> giving a speech", "output": {"url": "image_3.png"}}], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of <s0><s1>"}
aarashfeizi/jean-francois-godbout-batch4-repeats4-rank32-snr5.0
null
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "diffusers-training", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-29T14:36:56+00:00
null
null
{"license": "apache-2.0"}
tywinlu1988/123
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-29T14:36:59+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
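The template card omits loading code; a sketch for attaching this PEFT adapter to its base model (names taken from the card metadata), assuming it is a standard LoRA for a causal LM:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "Universal-NER/UniNER-7B-type"         # base_model from the card metadata
adapter = "jc80622/unilora_sec151_populated"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)  # layer the LoRA weights on top of the base model
```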
{"library_name": "peft", "base_model": "Universal-NER/UniNER-7B-type"}
jc80622/unilora_sec151_populated
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Universal-NER/UniNER-7B-type", "region:us" ]
null
2024-04-29T14:37:03+00:00
null
null
{}
magicalwhisper/Hivetrain
null
[ "region:us" ]
null
2024-04-29T14:37:26+00:00
null
null
{}
LAKSHM11-G/pegasus-x-base-pegasus_article_summarization_base3
null
[ "region:us" ]
null
2024-04-29T14:37:29+00:00
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
Ronicola/cardSorter
null
[ "fastai", "region:us", "has_space" ]
null
2024-04-29T14:37:36+00:00
null
null
{}
LaylansVoice/FreddieDredd
null
[ "region:us" ]
null
2024-04-29T14:38:24+00:00
image-classification
transformers
# Fine-tuned Vision Transformer for Alzheimer's Detection This repository hosts a Vision Transformer (ViT) model fine-tuned on the OASIS MRI dataset for the classification of brain MRI images based on the progression of Alzheimer's disease. The model categorizes images into four classes: demented, very mild demented, mild demented, and non-demented. ## Model Description The Vision Transformer has been adapted to tackle the challenging task of medical image analysis by leveraging its powerful attention mechanisms that capture complex patterns in image data. It has been fine-tuned to classify MRI images into stages of Alzheimer's disease, demonstrating the model's applicability to medical diagnostics. ## Dataset The OASIS MRI dataset consists of 80,000 brain MRI images from 461 patients, formatted in Nifti (.nii) and converted to JPEG for model training. The images represent various stages of Alzheimer's disease as follows: - Non-Demented - Very Mild Demented - Mild Demented - Demented This dataset conversion involved standardizing image formats for machine learning applications, ensuring that each image is suitable for deep learning models. ## Preprocessing Techniques During preprocessing: - MRI scans were converted from Nifti format to JPEG to simplify handling and reduce storage requirements. - Each image was resized to 128x128 pixels, ensuring uniformity across the dataset. - Pixel values were normalized to a [0, 1] scale to facilitate model training. ## How to Use This Model You can use this model directly for image classification: ```python import torch from transformers import ViTForImageClassification from PIL import Image from torchvision.transforms import Compose, Resize, ToTensor, Normalize import matplotlib.pyplot as plt id2label = { 0: "Mild Dementia", 1: "Moderate Dementia", 2: "Non Demented", 3: "Very mild Dementia" } # Set the device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Load the model model = ViTForImageClassification.from_pretrained('fawadkhan/ViT_FineTuned_on_ImagesOASIS') model.to(device) model.eval() # Define the image path image_path = 'your image path.jpg' image = Image.open(image_path).convert("RGB") # Define the transformations transform = Compose([ Resize((224, 224)), # or the original input size of your model ToTensor(), Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) # Standard normalization for ImageNet ]) # Preprocess the image input_tensor = transform(image).unsqueeze(0) # Create a mini-batch as expected by the model input_tensor = input_tensor.to(device) # Predict with torch.no_grad(): outputs = model(input_tensor) _, predicted = torch.max(outputs.logits, 1) # Retrieve the class name predicted_class = id2label[predicted[0].item()] print("Predicted class:", predicted_class) # Plot the image and the prediction plt.imshow(image) plt.title(f'Predicted class: {predicted_class}') plt.axis('off') # Turn off axis numbers and ticks plt.show() ``` ## Training Procedure The model was trained using the AdamW optimizer with a learning rate of 5e-5 for 10 epochs, balancing the need for accuracy with the risk of overfitting. 
## Evaluation Results Upon evaluation on a validation set, the model achieved an accuracy of 99%, showcasing its effectiveness in identifying different stages of Alzheimer's disease based on MRI scans.
{"library_name": "transformers"}
fawadkhan/ViT_FineTuned_on_ImagesOASIS
null
[ "transformers", "safetensors", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:39:09+00:00
null
null
{}
fawadkhan/feature_extractor
null
[ "region:us" ]
null
2024-04-29T14:39:36+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2-705M This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.5805 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00025 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.872 | 1.0 | 3 | 6.7840 | | 6.4623 | 2.0 | 6 | 6.5344 | | 6.1624 | 3.0 | 9 | 6.1049 | | 4.8878 | 4.0 | 12 | 5.8237 | | 4.908 | 5.0 | 15 | 5.1010 | | 4.6666 | 6.0 | 18 | 5.0636 | | 4.4854 | 7.0 | 21 | 4.7967 | | 5.0298 | 8.0 | 24 | 5.1645 | | 4.4216 | 9.0 | 27 | 4.4990 | | 4.1914 | 10.0 | 30 | 4.3240 | | 3.909 | 11.0 | 33 | 4.1773 | | 3.8537 | 12.0 | 36 | 3.9425 | | 3.4798 | 13.0 | 39 | 3.8305 | | 3.487 | 14.0 | 42 | 3.8480 | | 3.1947 | 15.0 | 45 | 3.5805 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
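The card has no usage example; a minimal hedged sketch for sampling from this checkpoint (note the final eval loss of 3.58, so outputs may be rough):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ninagroot/GPT2-705M"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes tokenizer files are included in the repo
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```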
{"tags": ["generated_from_trainer"], "model-index": [{"name": "GPT2-705M", "results": []}]}
ninagroot/GPT2-705M
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:39:39+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
aishu194/OrpoLlama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:40:05+00:00
null
null
{}
fatmhd1995/tinyllama-tokens_gender-v1
null
[ "region:us" ]
null
2024-04-29T14:40:12+00:00
null
null
{}
plmssr/openthai-mistral-7b-text-to-pandas
null
[ "region:us" ]
null
2024-04-29T14:40:55+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
notresort/christian-lora
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:42:35+00:00
null
null
{}
4piken/Llama-3-Gozaru-8B-Instruct-F32.gguf
null
[ "gguf", "region:us" ]
null
2024-04-29T14:42:56+00:00
null
null
{}
siacus/Llama-3-8B-adapt-Q4_K_M-ft.gguf
null
[ "gguf", "region:us" ]
null
2024-04-29T14:43:33+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below follows the standard SB3 Hub convention and may need adjusting for this repo): ```python from huggingface_sb3 import load_from_hub from stable_baselines3 import PPO # Download the checkpoint from the Hub (assumed standard filename) checkpoint = load_from_hub("Zan135/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-115.41 +/- 54.83", "name": "mean_reward", "verified": false}]}]}]}
Zan135/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-29T14:43:46+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with hqq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes and stop as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check that the requirements of the original repo MaziyarPanahi/Llama-3-8B-Instruct-64k are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install hqq ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer from hqq.engine.hf import HQQModelForCausalLM from hqq.models.hf.base import AutoHQQHFModel try: model = HQQModelForCausalLM.from_quantized("PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-4bit-smashed", device_map='auto') except Exception: model = AutoHQQHFModel.from_quantized("PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-4bit-smashed") tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-8B-Instruct-64k") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model MaziyarPanahi/Llama-3-8B-Instruct-64k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "MaziyarPanahi/Llama-3-8B-Instruct-64k"}
PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-4bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-64k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:43:57+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-dpo-lora

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5894
- Rewards/chosen: -0.2738
- Rewards/rejected: -0.6020
- Rewards/accuracies: 0.7035
- Rewards/margins: 0.3282
- Logps/rejected: -321.6407
- Logps/chosen: -310.1199
- Logits/rejected: -2.7529
- Logits/chosen: -2.7746

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6929 | 0.0262 | 100 | 0.6930 | -0.0001 | -0.0004 | 0.5250 | 0.0003 | -261.4788 | -282.7496 | -2.8388 | -2.8661 |
| 0.6923 | 0.0523 | 200 | 0.6923 | 0.0008 | -0.0009 | 0.6050 | 0.0017 | -261.5316 | -282.6624 | -2.8380 | -2.8653 |
| 0.6898 | 0.0785 | 300 | 0.6903 | 0.0035 | -0.0024 | 0.6640 | 0.0058 | -261.6760 | -282.3918 | -2.8350 | -2.8623 |
| 0.6872 | 0.1047 | 400 | 0.6862 | 0.0165 | 0.0021 | 0.6670 | 0.0144 | -261.2256 | -281.0900 | -2.8308 | -2.8577 |
| 0.6783 | 0.1309 | 500 | 0.6804 | 0.0209 | -0.0059 | 0.6835 | 0.0267 | -262.0230 | -280.6481 | -2.8215 | -2.8486 |
| 0.6729 | 0.1570 | 600 | 0.6733 | 0.0154 | -0.0272 | 0.6840 | 0.0426 | -264.1608 | -281.1958 | -2.8138 | -2.8410 |
| 0.6665 | 0.1832 | 700 | 0.6638 | -0.0035 | -0.0689 | 0.6755 | 0.0654 | -268.3266 | -283.0863 | -2.8060 | -2.8327 |
| 0.6427 | 0.2094 | 800 | 0.6546 | -0.0214 | -0.1104 | 0.6815 | 0.0889 | -272.4747 | -284.8825 | -2.8020 | -2.8283 |
| 0.6428 | 0.2355 | 900 | 0.6458 | -0.0247 | -0.1383 | 0.6770 | 0.1136 | -275.2685 | -285.2050 | -2.7942 | -2.8199 |
| 0.6381 | 0.2617 | 1000 | 0.6358 | -0.0638 | -0.2074 | 0.6785 | 0.1436 | -282.1761 | -289.1206 | -2.7887 | -2.8138 |
| 0.6488 | 0.2879 | 1100 | 0.6284 | -0.1378 | -0.3055 | 0.6790 | 0.1677 | -291.9890 | -296.5138 | -2.7826 | -2.8071 |
| 0.6427 | 0.3141 | 1200 | 0.6223 | -0.1104 | -0.2986 | 0.6835 | 0.1882 | -291.3028 | -293.7785 | -2.7931 | -2.8165 |
| 0.6131 | 0.3402 | 1300 | 0.6172 | -0.1466 | -0.3514 | 0.6865 | 0.2049 | -296.5806 | -297.3945 | -2.7951 | -2.8180 |
| 0.6326 | 0.3664 | 1400 | 0.6155 | -0.1752 | -0.3896 | 0.6860 | 0.2144 | -300.3966 | -300.2597 | -2.7920 | -2.8147 |
| 0.6128 | 0.3926 | 1500 | 0.6180 | -0.0630 | -0.2687 | 0.6890 | 0.2057 | -288.3090 | -289.0369 | -2.7980 | -2.8198 |
| 0.6223 | 0.4187 | 1600 | 0.6088 | -0.1688 | -0.4097 | 0.6945 | 0.2409 | -302.4074 | -299.6220 | -2.7926 | -2.8148 |
| 0.6338 | 0.4449 | 1700 | 0.6061 | -0.2152 | -0.4665 | 0.6925 | 0.2513 | -308.0869 | -304.2535 | -2.7961 | -2.8181 |
| 0.585 | 0.4711 | 1800 | 0.6050 | -0.1327 | -0.3850 | 0.6915 | 0.2523 | -299.9368 | -296.0054 | -2.7949 | -2.8174 |
| 0.577 | 0.4973 | 1900 | 0.6013 | -0.2170 | -0.4883 | 0.6965 | 0.2713 | -310.2670 | -304.4333 | -2.7954 | -2.8176 |
| 0.5945 | 0.5234 | 2000 | 0.5992 | -0.2107 | -0.4899 | 0.6995 | 0.2793 | -310.4293 | -303.8028 | -2.7903 | -2.8122 |
| 0.5913 | 0.5496 | 2100 | 0.5981 | -0.2373 | -0.5251 | 0.7025 | 0.2879 | -313.9529 | -306.4641 | -2.7863 | -2.8085 |
| 0.5816 | 0.5758 | 2200 | 0.5989 | -0.2688 | -0.5570 | 0.6970 | 0.2883 | -317.1411 | -309.6146 | -2.7849 | -2.8070 |
| 0.5824 | 0.6019 | 2300 | 0.5961 | -0.2227 | -0.5189 | 0.6955 | 0.2961 | -313.3233 | -305.0098 | -2.7821 | -2.8037 |
| 0.602 | 0.6281 | 2400 | 0.5969 | -0.2683 | -0.5669 | 0.6990 | 0.2986 | -318.1251 | -309.5652 | -2.7744 | -2.7961 |
| 0.5792 | 0.6543 | 2500 | 0.5963 | -0.2102 | -0.5041 | 0.6975 | 0.2938 | -311.8429 | -303.7615 | -2.7763 | -2.7980 |
| 0.6028 | 0.6805 | 2600 | 0.5974 | -0.1896 | -0.4790 | 0.6920 | 0.2895 | -309.3417 | -301.6964 | -2.7717 | -2.7933 |
| 0.5854 | 0.7066 | 2700 | 0.5930 | -0.2517 | -0.5615 | 0.7020 | 0.3098 | -317.5864 | -307.9027 | -2.7676 | -2.7892 |
| 0.5994 | 0.7328 | 2800 | 0.5920 | -0.2607 | -0.5775 | 0.7045 | 0.3167 | -319.1838 | -308.8107 | -2.7636 | -2.7851 |
| 0.5837 | 0.7590 | 2900 | 0.5913 | -0.2540 | -0.5721 | 0.7055 | 0.3181 | -318.6511 | -308.1379 | -2.7619 | -2.7834 |
| 0.5858 | 0.7851 | 3000 | 0.5910 | -0.2625 | -0.5835 | 0.7055 | 0.3210 | -319.7853 | -308.9898 | -2.7605 | -2.7819 |
| 0.5685 | 0.8113 | 3100 | 0.5914 | -0.2383 | -0.5571 | 0.7040 | 0.3188 | -317.1507 | -306.5707 | -2.7558 | -2.7777 |
| 0.5753 | 0.8375 | 3200 | 0.5903 | -0.2623 | -0.5868 | 0.7020 | 0.3246 | -320.1224 | -308.9666 | -2.7567 | -2.7783 |
| 0.5769 | 0.8636 | 3300 | 0.5900 | -0.2673 | -0.5934 | 0.7030 | 0.3260 | -320.7757 | -309.4716 | -2.7555 | -2.7771 |
| 0.5608 | 0.8898 | 3400 | 0.5896 | -0.2716 | -0.5988 | 0.7020 | 0.3273 | -321.3196 | -309.8930 | -2.7520 | -2.7739 |
| 0.6008 | 0.9160 | 3500 | 0.5895 | -0.2716 | -0.5994 | 0.7035 | 0.3277 | -321.3745 | -309.9000 | -2.7539 | -2.7755 |
| 0.585 | 0.9422 | 3600 | 0.5895 | -0.2722 | -0.6000 | 0.7020 | 0.3279 | -321.4418 | -309.9531 | -2.7549 | -2.7764 |
| 0.567 | 0.9683 | 3700 | 0.5893 | -0.2738 | -0.6022 | 0.7015 | 0.3284 | -321.6555 | -310.1171 | -2.7539 | -2.7755 |
| 0.5834 | 0.9945 | 3800 | 0.5893 | -0.2740 | -0.6023 | 0.7025 | 0.3283 | -321.6666 | -310.1333 | -2.7525 | -2.7742 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.19.1
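The card above omits a usage section. A minimal inference sketch, assuming the repo holds a standard PEFT LoRA adapter for the SFT base model and that the base tokenizer ships a chat template (both assumptions, not stated on the card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SFT base model, then attach the DPO-trained LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "alignment-handbook/zephyr-7b-sft-full", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "SeniorKabanocci/zephyr-7b-dpo-lora")
tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")

messages = [{"role": "user", "content": "What does DPO optimize?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=128)[0], skip_special_tokens=True))
```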
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "zephyr-7b-dpo-lora", "results": []}]}
SeniorKabanocci/zephyr-7b-dpo-lora
null
[ "peft", "safetensors", "mistral", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-04-29T14:44:08+00:00
null
null
{"license": "unknown"}
mjfan1999/BrantleyGilbert23-24
null
[ "license:unknown", "region:us" ]
null
2024-04-29T14:44:24+00:00
null
null
{}
XuJunHao-TJ/work
null
[ "region:us" ]
null
2024-04-29T14:44:26+00:00
null
null
This repo includes the .pth checkpoint of HRINet, proposed in the paper 'An Attention-Based Hemispheric Relation Inference Network for Perinatal Brain Age Prediction', which is currently under review. More details will follow once the paper is accepted.
{"license": "apache-2.0"}
uais-zll/HRINet
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-29T14:44:32+00:00
reinforcement-learning
transformers
# TRL Model

This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation.

## Usage

To use this model for inference, first install the TRL library:

```bash
python -m pip install trl
```

Since the underlying checkpoint is a BART sequence-to-sequence model, generate text with the `text2text-generation` pipeline:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="baek26/all_4814_all_6417_bart-base_rl")
outputs = generator("Hello, my llama is cute")
```

If you want to use the model for training or to obtain the outputs from the value head, load the model as follows (TRL's seq2seq value-head class matches the BART architecture):

```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("baek26/all_4814_all_6417_bart-base_rl")
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("baek26/all_4814_all_6417_bart-base_rl")

inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
{"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]}
baek26/all_4814_all_6417_bart-base_rl
null
[ "transformers", "safetensors", "bart", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:45:05+00:00
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo MaziyarPanahi/Llama-3-8B-Instruct-64k are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-8B-Instruct-64k")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model MaziyarPanahi/Llama-3-8B-Instruct-64k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "MaziyarPanahi/Llama-3-8B-Instruct-64k"}
PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-2bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-64k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:45:12+00:00
null
null
{}
hyu8828/YabaL_Mixv7
null
[ "region:us" ]
null
2024-04-29T14:45:23+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# HPY_gpt2_vB.3

This model is a fine-tuned version of [ClassCat/gpt2-base-french](https://huggingface.co/ClassCat/gpt2-base-french) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4858

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 209 | 1.5171 |
| No log | 2.0 | 419 | 1.4984 |
| 1.5282 | 3.0 | 629 | 1.4881 |
| 1.5282 | 3.98 | 836 | 1.4858 |

### Framework versions

- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
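No usage example is given; since the base model is a French GPT-2, a plain text-generation pipeline should work. A minimal sketch (the prompt is illustrative):

```python
from transformers import pipeline

# The base model is a French GPT-2, so a French prompt is a natural fit.
generator = pipeline("text-generation", model="azizkt/HPY_gpt2_vB.3")
print(generator("Il était une fois", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```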
{"license": "cc-by-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "HPY_gpt2_vB.3", "results": []}]}
azizkt/HPY_gpt2_vB.3
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:45:23+00:00
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo MaziyarPanahi/Llama-3-8B-Instruct-64k are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Llama-3-8B-Instruct-64k")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model MaziyarPanahi/Llama-3-8B-Instruct-64k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "MaziyarPanahi/Llama-3-8B-Instruct-64k"}
PrunaAI/MaziyarPanahi-Llama-3-8B-Instruct-64k-HQQ-1bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-64k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:45:31+00:00
image-to-text
transformers
# LongCap: Finetuned [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-base) for generating long captions of images, suitable for prompts for text-to-image generation and captioning text-to-image datasets

## Usage

You can use this model for conditional and unconditional image captioning.

### Using the Pytorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("unography/blip-long-cap")
model = BlipForConditionalGeneration.from_pretrained("unography/blip-long-cap")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt")
pixel_values = inputs.pixel_values
out = model.generate(pixel_values=pixel_values, max_length=250, num_beams=3, repetition_penalty=2.5)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on a sandy beach, interacting with a dog wearing a blue and white checkered shirt. the background is an ocean or sea with waves crashing in the distance. there are no other animals or people visible in the image.
```

</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("unography/blip-long-cap")
model = BlipForConditionalGeneration.from_pretrained("unography/blip-long-cap").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values
out = model.generate(pixel_values=pixel_values, max_length=250, num_beams=3, repetition_penalty=2.5)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on a sandy beach, interacting with a dog wearing a blue and white checkered shirt. the background is an ocean or sea with waves crashing in the distance. there are no other animals or people visible in the image.
```

</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("unography/blip-long-cap")
model = BlipForConditionalGeneration.from_pretrained("unography/blip-long-cap", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)
pixel_values = inputs.pixel_values
out = model.generate(pixel_values=pixel_values, max_length=250, num_beams=3, repetition_penalty=2.5)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on a sandy beach, interacting with a dog wearing a blue and white checkered shirt. the background is an ocean or sea with waves crashing in the distance. there are no other animals or people visible in the image.
```

</details>
{"license": "bsd-3-clause", "tags": ["image-captioning"], "datasets": ["unography/laion-81k-GPT4V-LIVIS-Captions"], "pipeline_tag": "image-to-text", "languages": ["en"], "widget": [{"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg", "example_title": "Savanna"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg", "example_title": "Football Match"}, {"src": "https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg", "example_title": "Airport"}], "inference": {"parameters": {"max_length": 250, "num_beams": 3, "repetition_penalty": 2.5}}}
unography/blip-long-cap
null
[ "transformers", "safetensors", "blip", "text2text-generation", "image-captioning", "image-to-text", "dataset:unography/laion-81k-GPT4V-LIVIS-Captions", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-29T14:46:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tulu2-7b-cost-UF-UI-HHRLHF-5e-6

This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8630
- Rewards/chosen: -4.9803
- Rewards/rejected: -5.7374
- Rewards/accuracies: 0.5905
- Rewards/margins: 0.7571
- Rewards/margins Max: 5.4488
- Rewards/margins Min: -2.7483
- Rewards/margins Std: 2.6664
- Logps/rejected: -892.1482
- Logps/chosen: -835.0510
- Logits/rejected: 1.2553
- Logits/chosen: 1.0857

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0556 | 1.0 | 3974 | 0.8630 | -4.9803 | -5.7374 | 0.5905 | 0.7571 | 5.4488 | -2.7483 | 2.6664 | -892.1482 | -835.0510 | 1.2553 | 1.0857 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
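The card lists only training details. Since the repo is a PEFT adapter for allenai/tulu-2-7b, a minimal loading sketch (assuming a standard LoRA adapter layout) would be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Attach the DPO-trained adapter to the tulu-2-7b base model.
base = AutoModelForCausalLM.from_pretrained(
    "allenai/tulu-2-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "just1nseo/tulu2-7b-cost-UF-UI-HHRLHF-5e-6")
tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-7b")
```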
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UF-UI-HHRLHF-5e-6", "results": []}]}
just1nseo/tulu2-7b-cost-UF-UI-HHRLHF-5e-6
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:allenai/tulu-2-7b", "region:us" ]
null
2024-04-29T14:47:46+00:00
null
null
{}
MLP-Lemma/lemma-pt-ckpt-3000
null
[ "region:us" ]
null
2024-04-29T14:48:01+00:00
text-generation
transformers
- **Developed by:** cstr
- **License:** apache-2.0
- **Finetuned from model:** vonjack/Phi-3-mini-4k-instruct-LLaMAfied

This is a quick experiment with only 150 ORPO steps on a German dataset.

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
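The card gives no inference snippet. Since the checkpoint is LLaMAfied (plain llama architecture in transformers format), standard causal-LM loading should apply; whether a chat template is bundled with the tokenizer is an assumption, so verify before relying on it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cstr/phi-3-orpo-v8_16")
model = AutoModelForCausalLM.from_pretrained(
    "cstr/phi-3-orpo-v8_16", torch_dtype=torch.bfloat16, device_map="auto"
)

# German prompt, matching the German ORPO data the card mentions.
messages = [{"role": "user", "content": "Erkläre ORPO in einem Satz."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```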
{"language": ["en", "de"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "vonjack/Phi-3-mini-4k-instruct-LLaMAfied"}
cstr/phi-3-orpo-v8_16
null
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "orpo", "conversational", "en", "de", "base_model:vonjack/Phi-3-mini-4k-instruct-LLaMAfied", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:48:25+00:00
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo gradientai/Llama-3-8B-Instruct-262k are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("gradientai/Llama-3-8B-Instruct-262k")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model gradientai/Llama-3-8B-Instruct-262k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "gradientai/Llama-3-8B-Instruct-262k"}
PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-1bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:gradientai/Llama-3-8B-Instruct-262k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:48:26+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tulu2-7b-cost-UF-UI-HHRLHF-2e-6

This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7862
- Rewards/chosen: -2.7944
- Rewards/rejected: -3.1878
- Rewards/accuracies: 0.5755
- Rewards/margins: 0.3934
- Rewards/margins Max: 3.4649
- Rewards/margins Min: -1.8768
- Rewards/margins Std: 1.7287
- Logps/rejected: -637.1887
- Logps/chosen: -616.4665
- Logits/rejected: 1.0155
- Logits/chosen: 0.8493

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.198 | 1.0 | 3974 | 0.7862 | -2.7944 | -3.1878 | 0.5755 | 0.3934 | 3.4649 | -1.8768 | 1.7287 | -637.1887 | -616.4665 | 1.0155 | 0.8493 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
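This repo is likewise a PEFT adapter over allenai/tulu-2-7b. If you prefer a standalone checkpoint, the adapter can be folded into the base weights; a sketch assuming a standard LoRA adapter (the output directory name is hypothetical):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("allenai/tulu-2-7b", torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(
    base, "just1nseo/tulu2-7b-cost-UF-UI-HHRLHF-2e-6"
).merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("tulu2-7b-cost-UF-UI-HHRLHF-2e-6-merged")  # hypothetical output dir
```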
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UF-UI-HHRLHF-2e-6", "results": []}]}
just1nseo/tulu2-7b-cost-UF-UI-HHRLHF-2e-6
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:allenai/tulu-2-7b", "region:us" ]
null
2024-04-29T14:48:57+00:00
null
null
{}
WangA/bert-base-finetuned-jd
null
[ "region:us" ]
null
2024-04-29T14:48:58+00:00
null
null
{}
itay-nakash/model_f06f2c3f16
null
[ "region:us" ]
null
2024-04-29T14:49:01+00:00
null
null
{}
corenet-community/imagenet-1k-224x224-vit-base
null
[ "region:us" ]
null
2024-04-29T14:49:36+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
NikiBase/t5-large_PREFIX_TUNING_SEQ2SEQ
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:50:23+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tulu2-7b-cost-UF-UI-2e-6

This model is a fine-tuned version of [allenai/tulu-2-7b](https://huggingface.co/allenai/tulu-2-7b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7594
- Rewards/chosen: -2.0999
- Rewards/rejected: -2.4320
- Rewards/accuracies: 0.5525
- Rewards/margins: 0.3320
- Rewards/margins Max: 2.8947
- Rewards/margins Min: -1.5016
- Rewards/margins Std: 1.4200
- Logps/rejected: -562.2834
- Logps/chosen: -548.3036
- Logits/rejected: 0.9632
- Logits/chosen: 0.7759

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.2338 | 1.0 | 2428 | 0.7594 | -2.0999 | -2.4320 | 0.5525 | 0.3320 | 2.8947 | -1.5016 | 1.4200 | -562.2834 | -548.3036 | 0.9632 | 0.7759 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
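Once the adapter is attached, generation would follow the tulu-2 chat format; the exact prompt template is an assumption here, so verify it against the allenai/tulu-2-7b card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "allenai/tulu-2-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "just1nseo/tulu2-7b-cost-UF-UI-2e-6")
tokenizer = AutoTokenizer.from_pretrained("allenai/tulu-2-7b")

# tulu-2 style prompt (assumed format -- check the base model card).
prompt = "<|user|>\nSummarize DPO in one sentence.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```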
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "allenai/tulu-2-7b", "model-index": [{"name": "tulu2-7b-cost-UF-UI-2e-6", "results": []}]}
just1nseo/tulu2-7b-cost-UF-UI-2e-6
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:allenai/tulu-2-7b", "region:us" ]
null
2024-04-29T14:50:33+00:00
null
null
{"license": "apache-2.0"}
ktrin4/tcc
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-29T14:50:37+00:00
image-to-image
null
This model is a ControlNet conditioned on abstract images composed of geometric shapes (dataset [here](https://www.kaggle.com/datasets/rishabhsrivastava66/images-made-up-of-geometric-shapes-controlnet/data)).
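A minimal diffusers sketch, assuming the checkpoint is stored in the diffusers ControlNet format and was trained against a Stable Diffusion 1.5-class base (neither is stated on the card); the conditioning image path is hypothetical:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "rishabhs66/ControlNet-Conditioned-On-Geometric-Shapes", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cond = load_image("geometric_shapes.png")  # hypothetical conditioning image
image = pipe("a futuristic city skyline", image=cond, num_inference_steps=30).images[0]
image.save("out.png")
```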
{"language": ["en"], "pipeline_tag": "image-to-image"}
rishabhs66/ControlNet-Conditioned-On-Geometric-Shapes
null
[ "image-to-image", "en", "region:us" ]
null
2024-04-29T14:51:19+00:00
null
null
{}
itay-nakash/model_28f6bfc33c
null
[ "region:us" ]
null
2024-04-29T14:51:19+00:00
null
null
{}
corenet-community/imagenet-1k-224x224-vit-large
null
[ "region:us" ]
null
2024-04-29T14:51:46+00:00
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
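Until the TODO above is filled in, here is a minimal sketch of the usual huggingface_sb3 loading pattern; the checkpoint filename inside the repo is an assumption, so check the repo's file listing:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed -- verify it in the repo's "Files" tab.
checkpoint = load_from_hub(repo_id="Krazeder/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```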
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "249.45 +/- 18.76", "name": "mean_reward", "verified": false}]}]}]}
Krazeder/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-29T14:51:49+00:00
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo gradientai/Llama-3-8B-Instruct-262k are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-4bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("gradientai/Llama-3-8B-Instruct-262k")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model gradientai/Llama-3-8B-Instruct-262k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "gradientai/Llama-3-8B-Instruct-262k"}
PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-4bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:gradientai/Llama-3-8B-Instruct-262k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:53:06+00:00
text-classification
transformers
# Model Trained Using AutoTrain

- Problem type: Text Regression

## Validation Metrics

- loss: 0.282262921333313
- mse: 0.2820460796356201
- mae: 0.4189736545085907
- r2: 0.74436353679844
- rmse: 0.5310801267623901
- explained_variance: 0.7570163011550903
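No usage snippet is included. AutoTrain text-regression checkpoints can typically be queried through the standard text-classification pipeline, reading the raw logit as the predicted score; this is an assumption about the checkpoint layout, not something the card states:

```python
from transformers import pipeline

regressor = pipeline("text-classification", model="abhishek/autotrain-m96nh-snymb")
# function_to_apply="none" returns the raw regression score instead of a softmaxed probability.
print(regressor("I love AutoTrain", function_to_apply="none"))
```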
{"tags": ["autotrain", "text-regression"], "datasets": ["autotrain-m96nh-snymb/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]}
abhishek/autotrain-m96nh-snymb
null
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "autotrain", "text-regression", "dataset:autotrain-m96nh-snymb/autotrain-data", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:53:10+00:00
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo gradientai/Llama-3-8B-Instruct-262k are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel

# Try the HQQ engine loader first, then fall back to the generic HQQ loader.
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("gradientai/Llama-3-8B-Instruct-262k")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model gradientai/Llama-3-8B-Instruct-262k, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "gradientai/Llama-3-8B-Instruct-262k"}
PrunaAI/gradientai-Llama-3-8B-Instruct-262k-HQQ-2bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:gradientai/Llama-3-8B-Instruct-262k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:53:14+00:00
null
transformers
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

weighted/imatrix quants of https://huggingface.co/tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 29.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 32.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 38.0 |  |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 42.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 42.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 46.8 |  |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 52.2 | IQ3_XXS probably better |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 55.0 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 58.3 |  |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 61.6 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 61.6 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 64.6 |  |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 67.9 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 72.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 75.6 |  |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 80.0 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 80.6 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 85.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 97.1 |  |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 100.1 |  |
| [PART 1](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF/resolve/main/Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 115.6 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
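The multi-part files above are plain byte-level splits of a single GGUF file, so they can be joined back together before loading; below is a minimal sketch of the concatenation step mentioned in TheBloke's READMEs linked above, using the i1-Q2_K filenames from the table (it assumes both parts were downloaded to the working directory):

```python
# join the byte-level parts of a multi-part GGUF into a single file,
# equivalent to `cat part1of2 part2of2 > out.gguf` on Linux/macOS
import shutil

parts = [
    "Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q2_K.gguf.part1of2",
    "Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q2_K.gguf.part2of2",
]
with open("Wizard-Mixtral-8x22B-Instruct-v0.1.i1-Q2_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```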
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1", "quantized_by": "mradermacher"}
mradermacher/Wizard-Mixtral-8x22B-Instruct-v0.1-i1-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:tlphams/Wizard-Mixtral-8x22B-Instruct-v0.1", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:53:40+00:00
automatic-speech-recognition
transformers
# Kotoba-Whisper-v1.1

_Kotoba-Whisper-v1.1_ is a Japanese ASR model based on [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0), with additional postprocessing stacks integrated as a [`pipeline`](https://huggingface.co/docs/transformers/en/main_classes/pipelines). The new features include (i) improved timestamps achieved by [stable-ts](https://github.com/jianfch/stable-ts) and (ii) punctuation insertion with [punctuators](https://github.com/1-800-BAD-CODE/punctuators/tree/main). These libraries are merged into Kotoba-Whisper-v1.1 via the pipeline and are applied seamlessly to the transcription predicted by [kotoba-tech/kotoba-whisper-v1.0](https://huggingface.co/kotoba-tech/kotoba-whisper-v1.0). The pipeline has been developed through a collaboration between [Asahi Ushio](https://asahiushio.com) and [Kotoba Technologies](https://twitter.com/kotoba_tech).

The following table presents the raw CER (unlike the usual CER, where punctuation is removed before computing the metric):

| model | CommonVoice 8.0 (Japanese) | JSUT Basic 5000 | ReazonSpeech Test |
|:--------------------------------|---------------------------------------:|-------------------------------------:|----------------------------------------:|
| kotoba-tech/kotoba-whisper-v1.0 | 17.8 | 15.2 | 17.8 |
| kotoba-tech/kotoba-whisper-v1.1 | 16.0 | 11.6 | 18.5 |
| openai/whisper-large-v3 | 15.4 | 13.6 | 20.7 |

## Transformers Usage

Kotoba-Whisper-v1.1 is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first install the latest version of Transformers.

```bash
pip install --upgrade pip
pip install --upgrade transformers accelerate torchaudio
pip install stable-ts==2.16.0
pip install punctuators==0.0.5
```

### Transcription

The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class to transcribe audio files as follows:

```python
import torch
from transformers import pipeline
from datasets import load_dataset

# config
model_id = "kotoba-tech/kotoba-whisper-v1.1"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# load model
pipe = pipeline(
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs,
    chunk_length_s=15,
    batch_size=16,
    trust_remote_code=True
)

# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = dataset[0]["audio"]

# run inference
result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
print(result)
```

- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:

```diff
- result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
+ result = pipe("audio.mp3", return_timestamps=True, generate_kwargs=generate_kwargs)
```

### Transcription with Prompt

Kotoba-Whisper can generate transcriptions with prompting as follows:

```python
import re
import torch
from transformers import pipeline
from datasets import load_dataset

# config
model_id = "kotoba-tech/kotoba-whisper-v1.1"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}

# load model
pipe = pipeline(
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs,
    chunk_length_s=15,
    batch_size=16,
    trust_remote_code=True
)

# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")

# --- Without prompt ---
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
print(text)
# 81歳、力強い走りに変わってきます。

# --- With prompt ---: Let's change `81` to `91`.
prompt = "91歳"
generate_kwargs['prompt_ids'] = pipe.tokenizer.get_prompt_ids(prompt, return_tensors="pt").to(device)
text = pipe(dataset[10]["audio"], generate_kwargs=generate_kwargs)['text']
# currently the ASR pipeline prepends the prompt to the transcription, so remove it
text = re.sub(rf"\A\s*{prompt}\s*", "", text)
print(text)
# あっぶったでもスルガさん、91歳、力強い走りに変わってきます。
```

### Flash Attention 2

We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it. To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):

```
pip install flash-attn --no-build-isolation
```

Then pass `attn_implementation="flash_attention_2"` to `from_pretrained`:

```diff
- model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
+ model_kwargs = {"attn_implementation": "flash_attention_2"} if torch.cuda.is_available() else {}
```

## Acknowledgements

* [OpenAI](https://openai.com/) for the Whisper [model](https://huggingface.co/openai/whisper-large-v3).
* Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration.
* Hugging Face 🤗 for the [Distil-Whisper codebase](https://github.com/huggingface/distil-whisper).
* [Reazon Human Interaction Lab](https://research.reazon.jp/) for the [ReazonSpeech dataset](https://huggingface.co/datasets/reazon-research/reazonspeech).
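As a follow-up to the transcription example above: when `return_timestamps=True`, the pipeline also returns the segment-level timestamps that stable-ts refines, and they can be iterated directly. A minimal sketch, reusing `pipe`, `sample`, and `generate_kwargs` from that example and assuming the standard transformers ASR output format (`result["chunks"]` with `timestamp` and `text` keys):

```python
result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
for chunk in result["chunks"]:
    start, end = chunk["timestamp"]  # note: the final chunk's end can be None
    print(f"[{start} -> {end}] {chunk['text']}")
```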
{"language": "ja", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "metrics": ["wer"], "widget": [{"example_title": "CommonVoice 8.0 (Test Split)", "src": "https://huggingface.co/datasets/japanese-asr/ja_asr.common_voice_8_0/resolve/main/sample.flac"}, {"example_title": "JSUT Basic 5000", "src": "https://huggingface.co/datasets/japanese-asr/ja_asr.jsut_basic5000/resolve/main/sample.flac"}, {"example_title": "ReazonSpeech (Test Split)", "src": "https://huggingface.co/datasets/japanese-asr/ja_asr.reazonspeech_test/resolve/main/sample.flac"}], "pipeline_tag": "automatic-speech-recognition", "model-index": [{"name": "kotoba-tech/kotoba-whisper-v1.0", "results": [{"task": {"type": "automatic-speech-recognition"}, "dataset": {"name": "CommonVoice_8.0 (Japanese)", "type": "japanese-asr/ja_asr.common_voice_8_0"}, "metrics": [{"type": "WER", "value": 59.27, "name": "WER"}, {"type": "CER", "value": 9.44, "name": "CER"}]}, {"task": {"type": "automatic-speech-recognition"}, "dataset": {"name": "ReazonSpeech (Test)", "type": "japanese-asr/ja_asr.reazonspeech_test"}, "metrics": [{"type": "WER", "value": 56.62, "name": "WER"}, {"type": "CER", "value": 12.6, "name": "CER"}]}, {"task": {"type": "automatic-speech-recognition"}, "dataset": {"name": "JSUT Basic5000", "type": "japanese-asr/ja_asr.jsut_basic5000"}, "metrics": [{"type": "WER", "value": 64.36, "name": "WER"}, {"type": "CER", "value": 8.48, "name": "CER"}]}]}]}
kotoba-tech/kotoba-whisper-v1.1
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "ja", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-29T14:53:45+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "vonjack/Phi-3-mini-4k-instruct-LLaMAfied"}
rwitz/phi-rp-dpo
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:vonjack/Phi-3-mini-4k-instruct-LLaMAfied", "region:us" ]
null
2024-04-29T14:53:57+00:00
null
null
{}
corenet-community/imagenet-1k-224x224-vit-huge
null
[ "region:us" ]
null
2024-04-29T14:54:30+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shallow6414/0mfv37i
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:54:37+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Intent-classification-12kv2 This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0074 - Accuracy: 0.9984 - F1: 0.9983 - Precision: 0.9983 - Recall: 0.9983 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.742 | 0.05 | 10 | 1.4822 | 0.6954 | 0.6918 | 0.7288 | 0.6966 | | 1.2849 | 0.11 | 20 | 0.9533 | 0.8713 | 0.8699 | 0.8899 | 0.8729 | | 0.8226 | 0.16 | 30 | 0.5235 | 0.9786 | 0.9786 | 0.9790 | 0.9785 | | 0.399 | 0.21 | 40 | 0.2295 | 0.9812 | 0.9812 | 0.9811 | 0.9817 | | 0.1871 | 0.26 | 50 | 0.1168 | 0.9839 | 0.9839 | 0.9844 | 0.9836 | | 0.0855 | 0.32 | 60 | 0.0508 | 0.9928 | 0.9928 | 0.9928 | 0.9928 | | 0.0546 | 0.37 | 70 | 0.0300 | 0.9947 | 0.9947 | 0.9948 | 0.9947 | | 0.0226 | 0.42 | 80 | 0.0271 | 0.9947 | 0.9948 | 0.9947 | 0.9948 | | 0.0306 | 0.47 | 90 | 0.0416 | 0.9888 | 0.9887 | 0.9894 | 0.9883 | | 0.0336 | 0.53 | 100 | 0.0157 | 0.9970 | 0.9970 | 0.9970 | 0.9971 | | 0.0373 | 0.58 | 110 | 0.0214 | 0.9951 | 0.9951 | 0.9952 | 0.9951 | | 0.0094 | 0.63 | 120 | 0.0121 | 0.9970 | 0.9971 | 0.9971 | 0.9970 | | 0.0077 | 0.68 | 130 | 0.0094 | 0.9980 | 0.9980 | 0.9980 | 0.9981 | | 0.0253 | 0.74 | 140 | 0.0077 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0233 | 0.79 | 150 | 0.0075 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0068 | 0.84 | 160 | 0.0080 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0286 | 0.89 | 170 | 0.0141 | 0.9964 | 0.9964 | 0.9964 | 0.9964 | | 0.0139 | 0.95 | 180 | 0.0104 | 0.9970 | 0.9970 | 0.9970 | 0.9971 | | 0.0043 | 1.0 | 190 | 0.0074 | 0.9977 | 0.9977 | 0.9977 | 0.9976 | | 0.0122 | 1.05 | 200 | 0.0065 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0071 | 1.11 | 210 | 0.0059 | 0.9980 | 0.9980 | 0.9981 | 0.9980 | | 0.0025 | 1.16 | 220 | 0.0083 | 0.9984 | 0.9984 | 0.9984 | 0.9983 | | 0.0232 | 1.21 | 230 | 0.0057 | 0.9984 | 0.9984 | 0.9984 | 0.9984 | | 0.0035 | 1.26 | 240 | 0.0056 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0246 | 1.32 | 250 | 0.0053 | 0.9984 | 0.9984 | 0.9984 | 0.9983 | | 0.0023 | 1.37 | 260 | 0.0063 | 0.9980 | 0.9980 | 0.9981 | 0.9980 | | 0.0021 | 1.42 | 270 | 0.0048 | 0.9984 | 0.9984 | 0.9984 | 0.9983 | | 0.002 | 1.47 | 280 | 0.0028 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | | 0.022 | 1.53 | 290 | 0.0023 | 0.9997 | 0.9997 | 0.9997 | 0.9997 | | 0.0135 | 1.58 | 300 | 0.0046 | 0.9987 | 0.9987 | 0.9987 | 0.9987 | | 0.0026 | 1.63 | 310 | 0.0082 | 0.9977 | 0.9977 | 0.9979 | 0.9976 | | 0.0019 | 1.68 | 320 | 0.0043 | 0.9990 | 0.9990 | 0.9991 | 0.9990 | | 0.0017 | 1.74 | 330 | 0.0035 | 0.9993 | 0.9994 | 0.9994 | 0.9994 | | 0.0019 | 1.79 | 340 | 0.0015 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0014 | 1.84 | 350 | 0.0013 | 1.0 | 1.0 | 1.0 | 
1.0 | | 0.0014 | 1.89 | 360 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0013 | 1.95 | 370 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0013 | 2.0 | 380 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0012 | 2.05 | 390 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0011 | 2.11 | 400 | 0.0011 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0011 | 2.16 | 410 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0011 | 2.21 | 420 | 0.0010 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0014 | 2.26 | 430 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.001 | 2.32 | 440 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.001 | 2.37 | 450 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 2.42 | 460 | 0.0009 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 2.47 | 470 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 2.53 | 480 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 2.58 | 490 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 2.63 | 500 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0008 | 2.68 | 510 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0008 | 2.74 | 520 | 0.0008 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0008 | 2.79 | 530 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0008 | 2.84 | 540 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0008 | 2.89 | 550 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0008 | 2.95 | 560 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.0 | 570 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0009 | 3.05 | 580 | 0.0007 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.11 | 590 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.16 | 600 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.21 | 610 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.26 | 620 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.32 | 630 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0007 | 3.37 | 640 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.42 | 650 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.47 | 660 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.53 | 670 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.58 | 680 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.63 | 690 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.68 | 700 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.74 | 710 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.79 | 720 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.84 | 730 | 0.0006 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.89 | 740 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 3.95 | 750 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.0 | 760 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.05 | 770 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.11 | 780 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.16 | 790 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.21 | 800 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.26 | 810 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.32 | 820 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.37 | 830 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.42 | 840 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.47 | 850 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.53 | 860 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.58 | 870 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0005 | 4.63 | 880 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.68 | 890 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0005 | 4.74 | 900 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0005 | 4.79 | 910 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.84 | 920 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0005 | 4.89 | 930 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0006 | 4.95 | 940 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | | 0.0005 | 5.0 | 950 | 0.0005 | 1.0 | 1.0 | 1.0 | 1.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.15.2
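A minimal inference sketch for this checkpoint (not part of the autogenerated card; the example utterance is arbitrary, and the intent label names depend on the unspecified fine-tuning dataset, so inspect `classifier.model.config.id2label` for the actual classes):

```python
from transformers import pipeline

# load the fine-tuned intent classifier from the Hub
classifier = pipeline("text-classification", model="Narkantak/Intent-classification-12kv2")

print(classifier("I want to cancel my subscription"))
# [{'label': ..., 'score': ...}] -- labels come from the fine-tuning data
print(classifier.model.config.id2label)  # the full intent label set
```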
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "Intent-classification-12kv2", "results": []}]}
Narkantak/Intent-classification-12kv2
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T14:55:05+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference speed, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Make sure that the requirements of the original repo cognitivecomputations/dolphin-2.9-llama3-8b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.

   ```bash
   pip install hqq
   ```

2. Load & run the model.

   ```python
   from transformers import AutoTokenizer
   from hqq.engine.hf import HQQModelForCausalLM
   from hqq.models.hf.base import AutoHQQHFModel

   try:
       model = HQQModelForCausalLM.from_quantized("PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-HQQ-2bit-smashed", device_map='auto')
   except Exception:
       model = AutoHQQHFModel.from_quantized("PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-HQQ-2bit-smashed")
   tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-llama3-8b")

   input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
   outputs = model.generate(input_ids, max_new_tokens=216)
   tokenizer.decode(outputs[0])
   ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-2.9-llama3-8b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
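The FAQ above also reports inference-memory metrics. A rough way to observe the smashed model's peak inference memory on a single GPU is sketched below (not PrunaAI's benchmarking code; `model` and `input_ids` are reused from the loading snippet above, assuming a CUDA device):

```python
import torch

torch.cuda.reset_peak_memory_stats()  # clear the peak-memory counter
outputs = model.generate(input_ids, max_new_tokens=216)
torch.cuda.synchronize()  # make sure all GPU work has finished before reading
print(f"peak inference memory: {torch.cuda.max_memory_allocated() / 1e9:.2f} GB")
```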
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "cognitivecomputations/dolphin-2.9-llama3-8b"}
PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-HQQ-2bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:55:27+00:00
null
null
{}
yanglll/gpt-mini-133M
null
[ "region:us" ]
null
2024-04-29T14:55:50+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model's measured inference speed, inference memory, or inference energy consumption is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Make sure that the requirements of the original repo cognitivecomputations/dolphin-2.9-llama3-8b are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.

   ```bash
   pip install hqq
   ```

2. Load & run the model.

   ```python
   from transformers import AutoTokenizer
   from hqq.engine.hf import HQQModelForCausalLM
   from hqq.models.hf.base import AutoHQQHFModel

   try:
       model = HQQModelForCausalLM.from_quantized("PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-HQQ-1bit-smashed", device_map='auto')
   except Exception:
       model = AutoHQQHFModel.from_quantized("PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-HQQ-1bit-smashed")
   tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.9-llama3-8b")

   input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
   outputs = model.generate(input_ids, max_new_tokens=216)
   tokenizer.decode(outputs[0])
   ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, cognitivecomputations/dolphin-2.9-llama3-8b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "cognitivecomputations/dolphin-2.9-llama3-8b"}
PrunaAI/cognitivecomputations-dolphin-2.9-llama3-8b-HQQ-1bit-smashed
null
[ "transformers", "llama", "text-generation", "pruna-ai", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:56:04+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
YernazarBis/llama-3-8b-tr-ft-mr
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-29T14:56:36+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-gtzan-v2 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 0.5042 - Accuracy: 0.87 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.9411 | 1.0 | 113 | 1.8991 | 0.48 | | 1.1479 | 2.0 | 226 | 1.4266 | 0.55 | | 1.1104 | 3.0 | 339 | 0.9525 | 0.71 | | 0.7571 | 4.0 | 452 | 1.1713 | 0.65 | | 0.6203 | 5.0 | 565 | 0.8307 | 0.76 | | 0.5817 | 6.0 | 678 | 0.6269 | 0.84 | | 0.3863 | 7.0 | 791 | 0.5911 | 0.85 | | 0.1104 | 8.0 | 904 | 0.5373 | 0.86 | | 0.2236 | 9.0 | 1017 | 0.4841 | 0.88 | | 0.0707 | 10.0 | 1130 | 0.5042 | 0.87 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
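A minimal inference sketch (not part of the autogenerated card; it assumes the same `marsyas/gtzan` "all" config used for evaluation above, which is a script-based dataset and therefore needs `trust_remote_code=True` in recent `datasets` versions):

```python
from datasets import load_dataset
from transformers import pipeline

# load the fine-tuned genre classifier
classifier = pipeline("audio-classification", model="heisenberg3376/wav2vec2-base-finetuned-gtzan")

# classify one clip from GTZAN (the dataset only ships a train split)
gtzan = load_dataset("marsyas/gtzan", "all", split="train", trust_remote_code=True)
print(classifier(gtzan[0]["audio"]))
# [{'label': ..., 'score': ...}, ...] -- the top genres with confidence scores
```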
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "wav2vec2-base-finetuned-gtzan-v2", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.87, "name": "Accuracy"}]}]}]}
heisenberg3376/wav2vec2-base-finetuned-gtzan
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-29T14:56:57+00:00
null
null
{}
corenet-community/imagenet-1k-512x512-vit-base
null
[ "region:us" ]
null
2024-04-29T14:57:18+00:00
null
null
{}
corenet-community/imagenet-1k-512x512-vit-large
null
[ "region:us" ]
null
2024-04-29T14:57:56+00:00
null
null
{}
LAKSHM11-G/pegasus-x-base-pegasus_article_summarization_base4
null
[ "region:us" ]
null
2024-04-29T14:58:37+00:00
null
null
{}
vm24bho/net_firewall_dfm
null
[ "region:us" ]
null
2024-04-29T14:58:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# llama_train_seq_cls_run1

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1

### Training results

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
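Since this repo stores a PEFT adapter rather than full model weights, loading it requires the Llama-2 base model plus the adapter. A minimal sketch, assuming a sequence-classification head as the run name suggests; `num_labels=2` is a placeholder (the card does not state the label count), and the gated `meta-llama` base repo requires approved access:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    num_labels=2,  # placeholder: the actual label count is not stated in the card
    torch_dtype=torch.float16,
)
model = PeftModel.from_pretrained(base, "isaaclee/llama_train_seq_cls_run1")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
model.config.pad_token_id = tokenizer.pad_token_id
```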
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama_train_seq_cls_run1", "results": []}]}
isaaclee/llama_train_seq_cls_run1
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-04-29T14:59:38+00:00
null
null
{}
corenet-community/imagenet-1k-512x512-vit-huge
null
[ "region:us" ]
null
2024-04-29T14:59:51+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
quickstep3621/hq7hkl2
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T15:00:10+00:00
token-classification
transformers
{}
PurCL/codeart-26m-ti-O3
null
[ "transformers", "pytorch", "codeart", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-29T15:00:14+00:00