| Column | Type | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] (date) | 2020-02-15 11:33:14 | 2025-06-26 18:27:55 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (499 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | timestamp[us, tz=UTC] (date) | 2022-03-02 23:29:04 | 2025-06-26 18:27:32 |
| card | string (length) | 11 | 1.01M |
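These columns map one-to-one onto the row fields below. As a minimal sketch of programmatic access (assuming the rows are published as a Hugging Face dataset; the `"models-metadata"` repo ID below is a hypothetical placeholder, not named in this dump):

```python
from datasets import load_dataset

# Hypothetical repo ID; substitute the dataset that actually hosts these rows.
ds = load_dataset("models-metadata", split="train")

# Each row carries the columns from the schema above.
row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["likes"])

# Example: keep only sentence-transformers models with at least one download.
filtered = ds.filter(
    lambda r: r["library_name"] == "sentence-transformers" and r["downloads"] > 0
)
print(len(filtered))
```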
- modelId: PrunaAI/seresnext50_32x4d.gluon_in1k-turbo-green-smashed
- author: PrunaAI
- last_modified: 2024-11-13T13:19:00Z
- downloads: 2
- likes: 0
- library_name: pruna-engine
- tags: [ "pruna-engine", "region:us" ]
- pipeline_tag: null
- createdAt: 2024-03-14T09:55:46Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
    <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT, CUDA graphs, and Triton.
- ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of that of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by synchronizing all GPU processes and stopping the measurement once they have all finished. "Async" metrics are obtained without synchronizing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case (a minimal timing sketch follows this card).

## Setup

You can run the smashed model with these steps:

0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. Check CUDA with `nvcc --version` and install it with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use the command-line interface (CLI):
        ```bash
        mkdir seresnext50_32x4d.gluon_in1k-turbo-green-smashed
        huggingface-cli download PrunaAI/seresnext50_32x4d.gluon_in1k-turbo-green-smashed --local-dir seresnext50_32x4d.gluon_in1k-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess

        repo_name = "seresnext50_32x4d.gluon_in1k-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the Hugging Face model page.
3. Load & run the model.
    ```python
    import torch

    from pruna_engine.PrunaModel import PrunaModel

    model_path = "seresnext50_32x4d.gluon_in1k-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    image = torch.rand(1, 3, 224, 224).to('cuda')  # Create a random test input.
    smashed_model(image)  # Run inference.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, seresnext50_32x4d.gluon_in1k, which provides the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
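The "Sync"/"Async" distinction above comes down to whether queued GPU work is forced to finish before the timer stops. A minimal PyTorch sketch of both measurements (an illustration, not the Pruna benchmarking code; `smashed_model` and the input shape are assumed from step 3 above):

```python
import time

import torch

def time_inference(model, x, sync: bool) -> float:
    """Time one forward pass in seconds.

    sync=True  -> block until all queued GPU kernels finish ("Sync" metric).
    sync=False -> stop as soon as control returns to the CPU ("Async" metric).
    """
    start = time.perf_counter()
    model(x)
    if sync:
        torch.cuda.synchronize()  # wait for all outstanding CUDA work
    return time.perf_counter() - start

# Assumes `smashed_model` was loaded as in step 3 above:
# x = torch.rand(1, 3, 224, 224).to("cuda")
# print("sync: ", time_inference(smashed_model, x, sync=True))
# print("async:", time_inference(smashed_model, x, sync=False))
```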
- modelId: PrunaAI/resnetv2_50d_evos.ah_in1k-turbo-tiny-green-smashed
- author: PrunaAI
- last_modified: 2024-11-13T13:18:57Z
- downloads: 2
- likes: 0
- library_name: pruna-engine
- tags: [ "pruna-engine", "region:us" ]
- pipeline_tag: null
- createdAt: 2024-03-10T09:47:24Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
    <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT, CUDA graphs, and Triton.
- ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup (a minimal warmup sketch follows this card). The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of that of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by synchronizing all GPU processes and stopping the measurement once they have all finished. "Async" metrics are obtained without synchronizing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case.

## Setup

You can run the smashed model with these steps:

0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. Check CUDA with `nvcc --version` and install it with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use the command-line interface (CLI):
        ```bash
        mkdir resnetv2_50d_evos.ah_in1k-turbo-tiny-green-smashed
        huggingface-cli download PrunaAI/resnetv2_50d_evos.ah_in1k-turbo-tiny-green-smashed --local-dir resnetv2_50d_evos.ah_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess

        repo_name = "resnetv2_50d_evos.ah_in1k-turbo-tiny-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the Hugging Face model page.
3. Load & run the model.
    ```python
    import torch

    from pruna_engine.PrunaModel import PrunaModel

    model_path = "resnetv2_50d_evos.ah_in1k-turbo-tiny-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    image = torch.rand(1, 3, 224, 224).to('cuda')  # Create a random test input.
    smashed_model(image)  # Run inference.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, resnetv2_50d_evos.ah_in1k, which provides the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
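The efficiency FAQ above notes that results are taken after a hardware warmup. A minimal sketch of such a warmup loop before any timing starts (an illustration under the same assumptions as step 3 above; the iteration count is arbitrary):

```python
import torch

def warmup(model, x, iters: int = 10) -> None:
    """Run a few throwaway forward passes so CUDA kernels, caches, and
    autotuners settle before measurement starts."""
    with torch.no_grad():
        for _ in range(iters):
            model(x)
    torch.cuda.synchronize()  # ensure the warmup work has actually completed

# Usage, assuming `smashed_model` was loaded as in step 3 above:
# warmup(smashed_model, torch.rand(1, 3, 224, 224).to("cuda"))
```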
- modelId: PrunaAI/convnext_large_mlp.clip_laion2b_augreg_ft_in1k-turbo-tiny-green-smashed
- author: PrunaAI
- last_modified: 2024-11-13T13:18:56Z
- downloads: 1
- likes: 0
- library_name: pruna-engine
- tags: [ "pruna-engine", "region:us" ]
- pipeline_tag: null
- createdAt: 2024-03-07T16:59:41Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
    <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT, CUDA graphs, and Triton.
- ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of that of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by synchronizing all GPU processes and stopping the measurement once they have all finished. "Async" metrics are obtained without synchronizing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since either can be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case.

## Setup

You can run the smashed model with these steps:

0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. Check CUDA with `nvcc --version` and install it with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use the command-line interface (CLI):
        ```bash
        mkdir convnext_large_mlp.clip_laion2b_augreg_ft_in1k-turbo-tiny-green-smashed
        huggingface-cli download PrunaAI/convnext_large_mlp.clip_laion2b_augreg_ft_in1k-turbo-tiny-green-smashed --local-dir convnext_large_mlp.clip_laion2b_augreg_ft_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess

        repo_name = "convnext_large_mlp.clip_laion2b_augreg_ft_in1k-turbo-tiny-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the Hugging Face model page.
3. Load & run the model.
    ```python
    import torch

    from pruna_engine.PrunaModel import PrunaModel

    model_path = "convnext_large_mlp.clip_laion2b_augreg_ft_in1k-turbo-tiny-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    image = torch.rand(1, 3, 224, 224).to('cuda')  # Create a random test input.
    smashed_model(image)  # Run inference.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, convnext_large_mlp.clip_laion2b_augreg_ft_in1k, which provides the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- modelId: seongil-dn/gte-further-filtered-neg5
- author: seongil-dn
- last_modified: 2024-11-13T13:18:56Z
- downloads: 7
- likes: 0
- library_name: sentence-transformers
- tags: [ "sentence-transformers", "safetensors", "new", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:24811", "loss:MultipleNegativesRankingLoss", "custom_code", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Alibaba-NLP/gte-multilingual-base", "base_model:finetune:Alibaba-NLP/gte-multilingual-base", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
- pipeline_tag: sentence-similarity
- createdAt: 2024-11-13T13:18:23Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:24811 - loss:MultipleNegativesRankingLoss base_model: Alibaba-NLP/gte-multilingual-base widget: - source_sentence: ๋ฏผ๋ฌผ๊ณผ ๋ฐ”๋‹ท๋ฌผ์—์„œ ์ž๋ผ๋Š” ์ดˆ๋ณธ ์‹๋ฌผ์€ ์–ด๋–ค ์ข…๋ฅ˜๊ฐ€ ์žˆ๋‚˜์š”? sentences: - ์ดˆ์‹ ์„ฑ ํ•ตํ•ฉ์„ฑ์—์„œ, R-๊ณผ์ •์ด ์ดˆ์‹ ์„ฑ ๋‚ด๋ถ€์—์„œ ํ•ต์œตํ•ฉ์„ ์ผ์œผํ‚ค๋Š” ์›์ธ์ด๋‹ค. R-๊ณผ์ •์€ ์›์†Œ์˜ ์ค‘์„ฑ์ž ํฌํš ๊ณผ์ •์œผ๋กœ ๋†’์€ ์˜จ๋„์—์„œ ๋†’์€ ๋ฐ€๋„์˜ ์ค‘์„ฑ์ž ์„ ์†์ด ์กด์žฌํ•  ๋•Œ ๋ฐœ์ƒํ•œ๋‹ค. R-๊ณผ์ •์—์„œ ์›์žํ•ต์€ ๋†’์€ ์ค‘์„ฑ์ž ์„ ์†์— ๋…ธ์ถœ๋˜๋ฉฐ, ๋ถˆ์•ˆ์ •ํ•  ์ •๋„๋กœ ์ค‘์„ฑ์ž๊ฐ€ ๋งŽ์€ ์›์žํ•ต์„ ๊ตฌ์„ฑํ•˜๋Š”๋ฐ, ์ด๋Š” ๊ณง ์•ˆ์ •๋œ ์ˆ˜์ค€์˜ ์ค‘์„ฑ์ž๋ฅผ ๊ฐ€์ง€๋Š” ์›์žํ•ต์œผ๋กœ ๋ถ•๊ดดํ•œ๋‹ค. ์ค‘์„ฑ์ž ์„ ์†์€ ๊ทน๋„๋กœ ๋†’์•„ ๋งค ์ดˆ ๋‹จ์œ„ ์„ผํ‹ฐ๋ฏธํ„ฐ๋‹น 10์ •๋„๋‚˜ ๋œ๋‹ค. ๋‹ค๋ฅธ ํ•ตํ•ฉ์„ฑ ๊ณผ์ •์œผ๋กœ๋Š” P-๊ณผ์ • ๋ฐ S-๊ณผ์ •์ด ์žˆ์œผ๋ฉฐ, S-๊ณผ์ •์€ ํ•ญ์„ฑ ํ•ตํ•ฉ์„ฑ์—์„œ ๋‚˜ํƒ€๋‚˜๋Š” ๋ฐฉ์‹์ด๋‹ค. - ๋ฏผ๋ฌผ ๋˜๋Š” ๋ฐ”๋‹ท๋ฌผ ์†์—์„œ ์ž๋ผ๋Š” ์ดˆ๋ณธ์œผ๋กœ์„œ, ์„ธ๊ณ„์˜ ์—ด๋Œ€์™€ ์˜จ๋Œ€์— ๋„๋ฆฌ ๋ถ„ํฌํ•˜๊ณ  ์žˆ์œผ๋ฉฐ ์•ฝ 15์†์˜ 100์ข… ๊ฐ€๋Ÿ‰์ด ์•Œ๋ ค์ ธ ์žˆ๋‹ค. ํ•œ๊ตญ์—๋Š” ์ž๋ผํ’€ยท๋ฌผ์งˆ๊ฒฝ์ด ๋“ฑ์˜ 5์† 5์ข…์ด ๋ถ„ํฌํ•˜๊ณ  ์žˆ๋‹ค. ๊ฝƒ์€ ์ทจ์‚ฐ๊ฝƒ์ฐจ๋ก€๊ฐ€ ํ‡ดํ™”๋œ ๋ชจ์–‘์œผ๋กœ ๋‹ฌ๋ฆฌ๋Š”๋ฐ, ๊ฝƒ์ฐจ๋ก€๋Š” ์•„๋žซ๋ถ€๋ถ„์ด ํ†ต ๋ชจ์–‘์œผ๋กœ ํ•ฉ์ณ์ง„ 2๊ฐœ์˜ ํฌ์ดˆ(ๅŒ…่พจ)๋กœ ๋‘˜๋Ÿฌ์‹ธ์—ฌ ์žˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ์•”๊ฝƒ ๋˜๋Š” ์–‘์„ฑํ™”์—์„œ๋Š” 2๊ฐœ ์ค‘์—์„œ 1๊ฐœ๋งŒ์ด ๋ฐœ๋‹ฌํ•˜๋ฉฐ ๋‹ค๋ฅธ ๊ฒƒ์€ ํ‡ดํ™”๋˜์–ด ์žˆ๋‹ค. ์ˆ˜๊ฝƒ์€ ์ž‘์œผ๋ฉฐ ํฌ์ดˆ ์†์— ๋งŽ์€ ์ˆ˜๊ฐ€ ๋งŒ๋“ค์–ด์ง€๊ธฐ๋„ ํ•œ๋‹ค. ์•”์ˆ˜๋”ด๊ทธ๋ฃจ ๋˜๋Š” ์•”์ˆ˜ํ•œ๊ทธ๋ฃจ์ด๊ณ  ๊ฝƒ๋ฎ์ด๋Š” ๋Œ€๋ถ€๋ถ„ ๊ฝƒ๋ฐ›์นจ๊ณผ ๊ฝƒ๋ถ€๋ฆฌ๋ฅผ ๊ตฌ๋ณ„ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๋ณดํ†ต 3์ˆ˜์„ฑ์ด๋‹ค. ์”จ๋ฐฉ์€ ํ•˜์œ„๋กœ 2-15๊ฐœ์˜ ์‹ฌํ”ผ๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ๋Š”๋ฐ, ์‹ฌํ”ผ์˜ ์˜†๋ฉด์€ ์„œ๋กœ ๊ฑฐ์˜ ๋–จ์–ด์ ธ ์žˆ์ง€๋งŒ ๊ฝƒํ„ฑ์˜ ์•ˆ์ชฝ ๋ฉด์ด ๋ถ™์–ด ์žˆ์–ด์„œ ๋งˆ์น˜ ํ•ฉ์ƒ ์‹ฌํ”ผ์ฒ˜๋Ÿผ ๋ณด์ธ๋‹ค. ์‹ฌํ”ผ ์•ˆ์—๋Š” ์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋ฐ‘์”จ๊ฐ€ ์ผ์ •ํ•œ ์žฅ์†Œ ์—†์ด ์–ด๋””์—๋‚˜ ๋‹ฌ๋ ค ์žˆ๋‹ค. ์ˆ˜๋ถ„์€ ๋ฌผ์˜ ํ๋ฆ„์ด๋‚˜ ๊ณค์ถฉ์— ์˜ํ•ด์„œ ๋˜๋Š” ์ž‘์€ ์ˆ˜๊ฝƒ์ด ์ž˜๋ฆฐ ํ˜•ํƒœ๋กœ ๋ฌผ ์œ„๋ฅผ ํ˜๋Ÿฌ๋‹ค๋‹ˆ๋‹ค๊ฐ€ ์•”๊ฝƒ์˜ ์•”์ˆ ๋จธ๋ฆฌ์— ๋ถ™์œผ๋ฉด ์ด๋ฃจ์–ด์ง„๋‹ค. - ์ผ๋ฐ˜์ ์œผ๋กœ ์ดˆ๋ณธ์‹๋ฌผ์€ ๋ชฉ๋ณธ์‹๋ฌผ์— ๋น„ํ•ด ๋งค์šฐ ์ž‘์ง€๋งŒ, ํŒŒ์ดˆ์†(๋ฐ”๋‚˜๋‚˜๊ฐ€ ์†ํ•˜๋Š” ์†) ์‹๋ฌผ์ฒ˜๋Ÿผ ์–ด์ง€๊ฐ„ํ•œ ๊ด€๋ชฉ๋ณด๋‹ค ํฌ๊ฒŒ ์ž๋ผ๋Š” ์ดˆ๋ณธ์‹๋ฌผ๋„ ์žˆ๋‹ค. - source_sentence: ๋ฏธ๊ตญ ๋…๋ฆฝ ์ „์Ÿ ์ค‘ ์ข…๊ต์™€ ์ •์น˜์˜ ๋ถ„๋ฆฌ์— ๋Œ€ํ•œ ๋…ผ์˜๋Š” ์–ด๋–ป๊ฒŒ ์ด๋ฃจ์–ด์กŒ๋‚˜์š”? sentences: - ์•„์šธ๋Ÿฌ ์ฒ ํŒ์ด ์•„๋‹Œ FRP ์†Œ์žฌ๋ฅผ ์ ๊ทน ์‚ฌ์šฉํ•˜์—ฌ ์‚ฌ์šฉ์ž๊ฐ€ ์ง์ ‘ ์™ธ๊ด€์„ ํŠœ๋‹ํ•˜๋Š” ๋“œ๋ ˆ์Šคํฌ๋ฉ”์ด์…˜(Dress-formation)์„ ๊ตฌํ˜„ํ•˜์˜€๋‹ค. ์ด๋กœ ์ธํ•ด 3D ํ”„๋ฆฐํ„ฐ๋ฅผ ์ด์šฉํ•ด ๋ฒ”ํผ๋‚˜ ํœ€๋”๋ฅผ ์ง์ ‘ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ๊ฐ€๋Šฅ์ผ€ ํ•˜์˜€๋‹ค. - ๋ฏธ๊ตญ ๋…๋ฆฝ ์ „์Ÿ์˜ ๊ณผ์ •์—์„œ ๋…๋ฆฝ์„ ์–ธ์„œ๋ฅผ ์ฑ„ํƒํ•œ ๋ฏธ๊ตญ์ธ๋“ค์€ ๋Œ€๋ถ€๋ถ„ ์ฒญ๊ต๋„์™€ ๊ฐ™์€ ๊ฐœ์‹ ๊ต ์‹ ์ž์˜€์œผ๋ฉฐ, ๋ฏธ๊ตญ ๋…๋ฆฝ ์„ ์–ธ์„œ์—์„œ ๋งํ•˜๋Š” ์ฒœ๋ถ€์ธ๊ถŒ์€ ๊ฐœ๊ฐœ์ธ์ด ์‹ ์—๊ฒŒ์„œ ๋ฐ›์€ ๊ฒƒ์ด๋ž€ ๋ฏฟ์Œ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  ์žˆ์—ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ์กด ๋กœํฌ์™€ ๊ฐ™์€ ์˜๊ตญ ๊ณ„๋ชฝ์ฃผ์˜ ์‚ฌ์ƒ๊ฐ€๋“ค์˜ ์˜ํ–ฅ์„ ๋ฐ›์•˜๋˜ ์ด๋“ค์€ ์ข…๊ต์™€ ์ •์น˜๊ฐ€ ์—„๊ฒฉํžˆ ๋ถ„๋ฆฌ๋˜์–ด์•ผ ํ•œ๋‹ค๊ณ  ์ƒ๊ฐํ•˜์˜€๊ณ , ์ด๊ฒƒ์€ ๋ฏธ๊ตญ ํ—Œ๋ฒ• ์ œ1์กฐ์— โ€œ์˜ํšŒ๋Š” ํŠน์ • ์ข…๊ต๋ฅผ ๊ตญ๊ต๋กœ ์‚ผ์„ ์ˆ˜ ์—†๋‹คโ€๊ณ  ๋ช…๋ฌธํ™” ๋˜์—ˆ๋‹ค. 
๊ทธ๋Ÿฌ๋‚˜, ์ข…๊ต์™€ ์ •์น˜๋Š” ์‰ฝ๊ฒŒ ๋ถ„๋ฆฌ๋˜์ง€ ์•Š์•˜๊ณ , ๋ฏธ๊ตญ ๋…๋ฆฝ ์ดํ›„ ํ”ํžˆ WASP๋ผ ๋ถˆ๋ฆฌ๋Š” ๋ฐฑ์ธยท์—ฅ๊ธ€๋กœ์„น์Šจ๊ณ„ยท๊ฐœ์‹ ๊ต๋„๋Š” ๋ฏธ๊ตญ์˜ ํ•ต์‹ฌ ์„ธ๋ ฅ์ด ๋˜์—ˆ๋‹ค. - ์ •๊ต๋ถ„๋ฆฌ์˜ ์ถœ๋ฐœ์€ ๋ฏธ๊ตญํ—Œ๋ฒ•์ด ๋งŒ๋“ค์–ด์งˆ ๋•Œ ๊ตญ๊ต๋ฅผ ๋ถ€์ธํ•˜๋Š”๋ฐ์„œ ์‹œ์ž‘๋œ๋‹ค. ์ •๊ต๋ถ„๋ฆฌ๋Š” ์ž์œ ์˜ ์›๋ฆฌ์ด๋‹ค. ์ •์น˜์™€ ์ข…๊ต๋Š” ๋ถ„๋ฆฌ๋˜์–ด์•ผ ํ•œ๋‹ค๋Š” ์ด์šฉ์–ด ๊ฐœ๋…์€ ์›๋ž˜ ๋ฏธ๊ตญ ํ—Œ๋ฒ• ์ˆ˜์ • 1์กฐ ๊ตํšŒ์™€ ๊ตญ๊ฐ€์˜ ๋ถ„๋ฆฌ๋ผ๋Š” ๋ง๋กœ ์ฒ˜์Œ ์‚ฌ์šฉ๋จ์œผ๋กœ์จ ์ดํ›„ ์„ธ๊ณ„์ ์œผ๋กœ ์ผ๋ฐ˜ํ™”๋˜์–ด ๊ฐ”๋‹ค. ํ•˜์ง€๋งŒ ์„œ์œ ๋Ÿฝ๊ณผ ๋ถ๋ฏธ๋ฅผ ์ œ์™ธํ•œ ์ง€์—ญ์—์„œ๋Š” ๊ตํšŒ โ€“๊ตญ๊ฐ€์˜ ๋ถ„๋ฆฌ๋ผ๋Š” ๋ง๋ณด๋‹ค โ€˜์ •๊ต๋ถ„๋ฆฌโ€™๊ฐ€ ๋” ์ผ๋ฐ˜์ ์œผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค. - source_sentence: ์ˆ˜๋™ ๊ฐ€์Šค ํ……์Šคํ… ์•„ํฌ ์šฉ์ ‘์˜ ์–ด๋ ค์šด ์ ์€ ๋ฌด์—‡์ธ๊ฐ€์š”? sentences: - '์ •์‚ฌ๊ฐ๋ฟ”์˜ ์ ˆ๋‘์ฒด์˜ ๋ถ€ํ”ผ ๊ณต์‹์€ ์ด์ง‘ํŠธ ์ œ13์™•์กฐ(์•ฝ 1850 BC)์— ์“ฐ์ธ ๋ชจ์Šคํฌ๋ฐ” ์ˆ˜ํ•™ ํŒŒํ”ผ๋ฃจ์Šค๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ๊ณ ๋Œ€ ์ด์ง‘ํŠธ ์ˆ˜ํ•™์—์„œ ๋ฐœ๊ฒฌ๋˜์—ˆ๋‹ค: ์—ฌ๊ธฐ์„œ "a"์™€ "b"๋Š” ๊นŽ์€ ๊ฐ๋ฟ”์˜ ๋ฐ‘๋ฉด๊ณผ ์œ—๋ฉด์˜ ๋ณ€์˜ ๊ธธ์ด์ด๊ณ , "h"๋Š” ๋†’์ด์ด๋‹ค. ์ด์ง‘ํŠธ์ธ๋“ค์€ ๊นŽ์€ ์ •์‚ฌ๊ฐ๋ฟ”์˜ ๋ถ€ํ”ผ๋ฅผ ์–ป๋Š” ๊ณต์‹์„ ์•Œ์•˜์ง€๋งŒ, ๋ชจ์Šคํฌ๋ฐ” ํŒŒํ”ผ๋ฃจ์Šค์—์„œ ์ฃผ์–ด์ง„ ์ด ๊ณต์‹์— ๋Œ€ํ•œ ์ฆ๋ช…์€ ์—†๋‹ค.' - ์ˆ˜๋™ ๊ฐ€์Šค ํ……์Šคํ… ์•„ํฌ ์šฉ์ ‘์€ ์šฉ์ ‘๊ธฐ๊ฐ€ ์š”๊ตฌํ•˜๋Š” ์กฐ์ • ๋•Œ๋ฌธ์— ์ƒ๋Œ€์ ์œผ๋กœ ์–ด๋ ค์šด ์šฉ์ ‘ ๋ฐฉ๋ฒ•์ด๋‹ค. ํ† ์น˜ ์šฉ์ ‘๊ณผ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ GTAW๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ๋‘์†์ด ํ•„์š”ํ•˜๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ์‘์šฉ์—์„œ๋Š” ํ•œ์†์œผ๋กœ ์šฉ์ ‘ ์˜์—ญ์— ํ•„๋Ÿฌ ๊ธˆ์†์„ ์ˆ˜๋™์œผ๋กœ ๊ณต๊ธ‰ํ•˜๊ณ  ๋‹ค๋ฅธ ์šฉ์ ‘ ํ† ์น˜๋ฅผ ์กฐ์ž‘ํ•ด์•ผํ•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์งง์€ ์•„ํฌ ๊ธธ์ด๋ฅผ ์œ ์ง€ํ•˜๋ฉด์„œ ์ „๊ทน๊ณผ ์ž‘์—…๋ฌผ ์‚ฌ์ด์˜ ์ ‘์ด‰์„ ๋ฐฉ์ง€ํ•˜๋Š” ๊ฒƒ๋„ ์ค‘์š”ํ•˜๋‹ค. - '์•„ํฌ์šฉ์ ‘์˜ ์ผ์ข…. ์œต์ ์ด ์ƒ๋‹นํžˆ ๋†’์€ ํ……์Šคํ… ๋ด‰์œผ๋กœ ๋ถ€ํ„ฐ ์•„ํฌ๊ฐ€ ๋ฐœ์ƒํ•ด ๊ทธ ์—ด๋กœ ์žฌ๋ฃŒ๋ฅผ ๋…น์ธ๋‹ค. ๋ฐ˜์ž๋™ ์šฉ์ ‘๊ณผ ๊ฐ™์ด ์‹ค๋“œ๊ฐ€์Šค๋ฅผ ์ด์šฉํ•œ๋‹ค. ๋…น์ด๋Š” ์žฌ๋ฃŒ๋ฅผ ์ฒจ๊ฐ€ํ•˜๋Š”๊ฒƒ๋„ ๊ฐ€๋Šฅํ•˜๋‹ค. ์ •๋ฐ€ํ•œ ์šฉ์ ‘์˜ ๊ฒฝ์šฐ์— ์ข‹์•„ ๊ณ ์•• ํŒŒ์ดํ”„๋‚˜ ์ •๋ฐ€๊ธฐ๊ธฐ์˜ ์šฉ์ ‘ ๋“ฑ์— ์‚ฌ์šฉ๋œ๋‹ค. ๊ณ ์œต์ ์˜ ํ……์Šคํ…์„ ์ „๊ทน์œผ๋กœ ํ•˜๊ธฐ๋•Œ๋ฌธ์— ์ „๊ทน์ž์ฒด์˜ ์†Œ๋ชจ๋Š” ์ ์œผ๋‚˜ ์šฉ์ ‘๊ธˆ์†์„ ๋ถ€๊ฐ€ํ•˜๊ธฐ ์œ„ํ•ด ์™ผ์†์— ์šฉ์ ‘๋ด‰์„ ๋“ค๊ณ  ์ž‘์—…ํ•ด์•ผํ•œ๋‹ค. ์–‘์†์„ ์‚ฌ์šฉํ•˜๊ธฐ๋•Œ๋ฌธ์— ์ˆ™๋ จ๋„๊ฐ€ ํ•„์š”ํ•˜๋‹ค. ๋น„๊ต์  ๋‚œ์ด๋„๋Š” ๋†’์ง€๋งŒ, ๋น„์ฒ ๊ธˆ์†์— ๋Œ€ํ•œ ์šฉ์ ‘์— ์ ์‘๋ ฅ์ด ๋†’๋‹ค. ์‹ค์ œ๋กœ ์•Œ๋ฃจ๋ฏธ๋Š„์ด๋‚˜ ์Šคํ…Œ์ธ๋ ˆ์Šค ์šฉ์ ‘์„ ์‚ฌ์šฉํ•˜๋ฉด, ์•„ํฌ๊ฐ€ ํ”„๋ผ์ฆˆ๋งˆ์ƒํƒœ๋กœ ๋˜์–ด ๊ฐ€์Šค ์šฉ์ ‘์ด๋‚˜ ๋‚ฉ๋•œ๊ณผ ๊ฐ™์ด ๋…น์•„ ๋ถ™๊ธฐ ๋•Œ๋ฌธ์— ๊ธฐ๋ณธ์ ์œผ๋กœ ๋งž๋Œ€๊ธฐ์šฉ์ ‘ ์ค‘์—์„œ๋Š” ๊ฐ€์žฅ ๊ฐ„๋‹จํ•œ ๋ฐฉ๋ฒ•์ด๋‹ค. ์œ ์ผํ•˜๊ฒŒ ์šฉ์ ‘์ž‘์—… ์‹œ ๋ถˆ๊ฝƒ์ด ํŠ€์ง€ ์•Š์€ ํŠน์ง•์ด ์žˆ๋‹ค.' - source_sentence: ๊ณ ๋ ˆ๋‹คํƒ€ ์นœํ™ฉ์˜ ์ฆ์†์ธ ๋ฏธ๋‚˜๋ชจํ† ๋…ธ ๊ณ ์‡ผ๋Š” ์–ด๋–ค ์—ญํ• ์„ ํ–ˆ๋‚˜์š”? sentences: - ์ œ 58๋Œ€ ๊ณ ์ฝ” ์ฒœํ™ฉ์˜ ์ž์†. ์ œ 1ํ™ฉ์ž ๊ณ ๋ ˆ๋‹คํƒ€ ์นœํ™ฉ(ๆ˜ฏๅฟ ่ฆช็Ž‹)์˜ ์ฆ์† ๋ฏธ๋‚˜๋ชจํ† ๋…ธ ๊ณ ์‡ผ(ๆบๅบทๅฐš)๋Š” ๋ถˆ์ƒ ์ œ์ž‘ ์žฅ์ธ์˜ ์‹œ์กฐ๋กœ, ๊ทธ ๊ณ„ํ†ต์—์„œ ๋ถˆ์ƒ ์ œ์ž‘๊ณต์˜ ๊ฐ ์œ ํŒŒ๊ฐ€ ๋ฐฐ์ถœ๋˜์—ˆ๋‹ค. - ์ง€๋ฐฉ๋„ ์ œ815ํ˜ธ์„ ์€ ์ „๋ผ๋‚จ๋„ ๋ฌด์•ˆ๊ตฐ ์ผ๋กœ์ ์›”์•”๋ฆฌ ์›”์•” ๊ต์ฐจ๋กœ์™€ ํ•จํ‰๊ตฐ ํ•จํ‰์ ๋Œ€๋•๋ฆฌ ๋ฐฑ๊ณก ๊ต์ฐจ๋กœ๋ฅผ ์ž‡๋Š” ์ „๋ผ๋‚จ๋„์˜ ์ง€๋ฐฉ๋„์ด๋‹ค. ์ผ๋กœ ๋‚˜๋“ค๋ชฉ์„ ํ†ตํ•ด ์„œํ•ด์•ˆ๊ณ ์†๋„๋กœ์™€ ์—ฐ๊ฒฐ๋˜๋ฉฐ ๋ฌด์•ˆ๊ตญ์ œ๊ณตํ•ญ์œผ๋กœ ์ด์–ด์ง€๋Š” ๋„๋กœ์ด๊ธฐ๋„ ํ•˜๋‹ค. - ์ œ 88๋Œ€ ๊ณ ์‚ฌ๊ฐ€ ์ฒœํ™ฉ์˜ ์†์ž ๊ณ ๋ ˆ์•ผ์Šค ์นœ์™•(ๆƒŸๅบท่ฆช็Ž‹)์˜ ์ž์†. 
๊ณ ์‚ฌ๊ฐ€ ์ฒœํ™ฉ์˜ ์ œ 2ํ™ฉ์ž ๋ฌด๋„ค๋‹ค์นด ์นœ์™•(ๅฎ—ๅฐŠ่ฆช็Ž‹)์ด ๊ฐ€๋งˆ์ฟ ๋ผ ๋ง‰๋ถ€ ์ œ 6๋Œ€ ์‡ผ๊ตฐ์˜ ์ž๋ฆฌ๋ฅผ ์‚ฌํ‡ดํ•œ ๋’ค, ๊ทธ์˜ ์•„๋“ค ์ค‘ ํ•˜๋‚˜๋กœ ์ œ 7๋Œ€ ์‡ผ๊ตฐ์— ์ทจ์ž„ํ•œ ๊ณ ๋ ˆ์•ผ์Šค์—๊ฒŒ ๋ฏธ๋‚˜๋ชจํ†  ์„ฑ์„ ๋‚ด๋ ค ๋ฏธ๋‚˜๋ชจํ† ๋…ธ ๊ณ ๋ ˆ์•ผ์Šค๊ฐ€ ๋˜์—ˆ๋‹ค. ๋‹จ, ๊ทธ ๋’ค ๊ฐ€๋งˆ์ฟ ๋ผ ๋ง‰๋ถ€๊ฐ€ ๊ณ ๋ ˆ์•ผ์Šค๋ฅผ ๊ตํ† ๋กœ ์ถ”๋ฐฉํ•˜๊ณ  ๊ทธ๋ฅผ ๋Œ€์‹ ํ•˜์—ฌ ํžˆ์‚ฌ์•„ํ‚ค ์นœ์™•(ไน…ๆ˜Ž่ฆช็Ž‹)์„ ์‡ผ๊ตฐ์œผ๋กœ ์ถ”๋Œ€ํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ๊ทธ ์‚ฌ์ „์ค€๋น„๋กœ์„œ ๊ณ ๋ ˆ์•ผ์Šค๋ฅผ ์นœ์™•์— ์ž„๋ช…ํ•˜๊ฒŒ ํ•˜์—ฌ, ๊ณ ๋ ˆ์•ผ์Šค๋Š” ํ™ฉ์กฑ์œผ๋กœ ๋ณต๊ท€ํ•˜์˜€๋‹ค. ์ฆ‰, ๊ณ ์‚ฌ๊ฐ€ ๊ฒ์ง€๋Š” ๊ณ ๋ ˆ์•ผ์Šค 1๋Œ€๋กœ ๋๋‚ฌ๋‹ค. - source_sentence: ์œ ์„ฑ์ฒด ํ๋ฆ„์€ ์–ด๋–ป๊ฒŒ ๋ถ„ํฌ๋˜์–ด ์žˆ๋‚˜์š”? sentences: - ์œ ์„ฑ์ฒด ํ๋ฆ„์€ ๋Œ€์ฒด๋กœ ๋ชจํ˜œ์„ฑ์˜ ๊ณต์ „๊ถค๋„๋ฅผ ์ค‘์‹ฌ์œผ๋กœ ์›ํ†ตํ˜•์œผ๋กœ ๋ถ„ํฌ๋˜์–ด ์žˆ๋‹ค. ์œ ์„ฑ์ฒด์˜ ๋ฐ€๋„๋Š” ๋ชจํ˜œ์„ฑ์˜ ๊ณต์ „ ๊ถค๋„๋กœ ๊ฐˆ์ˆ˜๋ก ๋†’์•„์ง€๋ฉฐ, ์ง€๊ตฌ๊ฐ€ ์ด๋Ÿฌํ•œ ์œ ์„ฑ์ฒด ํ๋ฆ„์„ ๊ด€ํ†ตํ•  ๋•Œ, ์ค‘์‹ฌ์— ๋‹ค๊ฐ€๊ฐˆ์ˆ˜๋ก ๋” ๋งŽ์€ ์œ ์„ฑ์ฒด๊ฐ€ ์ง€๊ตฌ ๋Œ€๊ธฐ ์†์œผ๋กœ ๋Œ์ž…ํ•˜๊ฒŒ ๋œ๋‹ค. ๋”ฐ๋ผ์„œ ํ•œ ์œ ์„ฑ์šฐ๊ฐ€ ๋‚˜ํƒ€๋‚  ๋•Œ๋Š” ๋งค์ผ ๋‚˜ํƒ€๋‚˜๋Š” ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋‹ค๊ฐ€ ๊ฐ์†Œํ•˜๋Š” ๊ฒฝํ–ฅ์„ ๋ค๋‹ค. ๊ด€์ธก์ ์œผ๋กœ ์ง€์ˆ˜ํ•จ์ˆ˜์ ์œผ๋กœ ์ฆ๊ฐ€ํ•˜๋‹ค๊ฐ€ ์ง€์ˆ˜ํ•จ์ˆ˜์ ์œผ๋กœ ๊ฐ์†Œํ•˜๋Š” ๊ฒฝํ–ฅ์„ ๋ณด์ธ๋‹ค. ํ•œ ์œ ์„ฑ์šฐ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋Š” ์‹œ๊ธฐ์˜ ์œ ์„ฑ๊ฐœ์ˆ˜์˜ ๋ณ€ํ™”๋Š”, ์–ด๋–ค ์‹œ์  formula_1์—์„œ formula_2 ์™€ ๊ฐ™์ด ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค. ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๋Š” formula_3์ผ ๋•Œ ์ตœ๋Œ€๊ฐ€ ๋˜๋Š”๋ฐ, ์ด๊ฒƒ์„ ๊ทน๋Œ€๊ธฐ๋ผ๊ณ  ํ•œ๋‹ค. ๋˜ํ•œ formula_4์˜ ์‹œ๊ฐ„ ๊ทœ๋ชจ๋Š” ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๊ฐ€ ํ™•์—ฐํ•˜๊ฒŒ ๋ณ€ํ•˜๋Š” ์‹œ๊ฐ„ ๊ทœ๋ชจ์— ํ•ด๋‹นํ•œ๋‹ค. ์ด๋ฅธ๋ฐ” ์ง€์ˆ˜ํ•จ์ˆ˜์  ์‹œ๊ฐ„์ฒ™๋„(e-folding time scale)์ด๋ผ๊ณ  ํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ๋‹จ์ˆœํžˆ ๋‚˜ํƒ€๋‚˜๋Š” ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๋ฅผ ์„ธ๊ธฐ๋งŒ ํ•ด๋„ ์ด๋Ÿฌํ•œ ๊ฐ’๋“ค์€ ์ธก์ •ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด๋กœ๋ถ€ํ„ฐ ์ง€๊ตฌ ๊ณต์ „ ๊ถค๋„์ƒ์— ๋†“์—ฌ ์žˆ๋Š” ์œ ์„ฑ์ฒด ํ๋ฆ„์˜ ๋ถ„ํฌ๋ฅผ ์ž์„ธํžˆ ์—ฐ๊ตฌํ•  ์ˆ˜ ์žˆ๋‹ค. - ใ€ŠG.I. ๋ธ”๋ฃจ์Šคใ€‹(G.I. Blues)๋Š” 1960๋…„ ๋ฏธ๊ตญ ๋ฎค์ง€์ปฌ ์ฝ”๋ฏธ๋”” ์˜ํ™”๋กœ ๋…ธ๋จผ ํ„ฐ๋กœ๊ทธ๊ฐ€ ์—ฐ์ถœํ•˜๊ณ  ์—˜๋น„์Šค ํ”„๋ ˆ์Šฌ๋ฆฌ, ์ค„๋ฆฌ์—ฃ ํ”„๋กœ์Šค, ๋กœ๋ฒ„๋ธŒ ์•„์ด๋ฒ„์Šค๊ฐ€ ์ถœ์—ฐํ•œ๋‹ค. ์˜ํ™”๋Š” ํŒŒ๋ผ๋งˆ์šดํŠธ ํ”ฝ์ฒ˜์Šค ์ŠคํŠœ๋””์˜ค์—์„œ ์ดฌ์˜๋˜์—ˆ์œผ๋ฉฐ ํ”„๋ ˆ์Šฌ๋ฆฌ๊ฐ€ ์ œ๋Œ€ํ•˜๊ธฐ ์ „์— ์ „ ์ œ์ž‘ ํ’๊ฒฝ ์žฅ๋ฉด์ด ๋…์ผ์—์„œ ์ดฌ์˜๋˜์—ˆ๋‹ค. ์˜ํ™”๋Š” ใ€Š๋ฒ„๋ผ์ด์–ดํ‹ฐใ€‹์˜ ์ „๊ตญ ๋ฐ•์Šค์˜คํ”ผ์Šค ์ฐจํŠธ์—์„œ 2์œ„๋ฅผ ๋‹ฌ์„ฑํ–ˆ๋‹ค. ๋กœ๋Ÿด ์–ด์›Œ๋“œ์˜ 1960๋…„ ์ตœ๊ณ ์˜ ๋ฎค์ง€์ปฌ ๋ถ€๋ฌธ์—์„œ 2์œ„ ์ƒ์„ ์ˆ˜์ƒํ–ˆ๋‹ค. - ์œ ์„ฑ์ฒด๋Š” ๋Œ€๋ถ€๋ถ„ ํ˜œ์„ฑ์—์„œ ๋–จ์–ด์ ธ ๋‚˜์˜จ ๋ถ€์Šค๋Ÿฌ๊ธฐ์ด๋ฉฐ, ์ผ๋ถ€๋Š” ์†Œํ–‰์„ฑ์—์„œ ๋–จ์–ด์ ธ ๋‚˜์˜จ ๋ถ€์Šค๋Ÿฌ๊ธฐ๋„ ์žˆ๋‹ค. ์œ ์„ฑ์ฒด๋Š” ํ˜œ์„ฑ์ด ํ•ด์— ๊ฐ€๊นŒ์ด ์˜ฌ ๋•Œ๋งˆ๋‹ค ๋ฐฉ์ถœ๋˜๋Š”๋ฐ, ํ•ด์— ์ ‘๊ทผํ•œ ํ˜œ์„ฑ์˜ ์†๋„๋Š” ๋ณดํ†ต ์ˆ˜ ์‹ญ km/s๋ฅผ ๋„˜๋Š”๋‹ค. ์œ ์„ฑ์ฒด๋“ค์ด ํ˜œ์„ฑ์—์„œ ๋–จ์–ด์ ธ ๋‚˜์˜ฌ ๋•Œ, ๋ฐฉ์ถœ ์†๋„๊ฐ€ ์กฐ๊ธˆ์”ฉ ๋‹ค๋ฅด๊ณ  ํ˜œ์„ฑ์ด ๋˜ํ•œ ์ž์ „ํ•˜๋ฏ€๋กœ ์œ ์„ฑ์ฒด๋“ค์˜ ์†๋„ ์„ฑ๋ถ„์€ ํ˜œ์„ฑ์˜ ์†๋„์™€ ์•ฝ๊ฐ„์”ฉ ์ฐจ์ด๊ฐ€ ์ƒ๊ธฐ๊ฒŒ ๋œ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ทธ ์–‘์€ ํ˜œ์„ฑ์˜ ์†๋„์— ๋น„ํ•ด ์•„์ฃผ ์ž‘๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ์ž‘์€ ์†๋„ ์ฐจ์ด ๋•Œ๋ฌธ์— ์œ ์„ฑ์ฒด๋“ค์€ ๋Œ€์ฒด๋กœ ํ˜œ์„ฑ์˜ ๊ถค๋„๋ฅผ ๋”ฐ๋ผ ์šด๋™์„ ํ•˜๋˜ ์•ฝ๊ฐ„์”ฉ ๋‹ค๋ฅธ ๊ถค๋„๋ฅผ ๋Œ๊ฒŒ ๋˜์–ด, ๋งˆ์นจ๋‚ด ํ˜œ์„ฑ์—์„œ ๋‚˜์˜จ ์œ ์„ฑ์ฒด๋“ค์€ ํ˜œ์„ฑ์˜ ๊ณต์ „ ๊ถค๋„๋ฅผ ๋”ฐ๋ผ ๋ ๋ฅผ ํ˜•์„ฑํ•˜๊ฒŒ ๋œ๋‹ค. ๋”๊ตฐ๋‹ค๋‚˜ ํ•œ๋ฒˆ ๋ฐฉ์ถœ๋œ ์œ ์„ฑ์ฒด๋Š” ์ฃผ๋กœ ๋ชฉ์„ฑ๊ณผ ํ•ด์˜ ์ธ๋ ฅ์„ ๋ฐ›๊ฒŒ ๋˜๋ฏ€๋กœ ๋ ๋Š” ์ ์  ๋” ๋„“์–ด์ง€๊ณ  ๊ท ์งˆํ•˜๊ฒŒ ๋œ๋‹ค. ์ด๊ฒƒ์„ ์œ ์„ฑ์ฒด ํ๋ฆ„(meteoroid stream)์ด๋ผ๊ณ  ํ•œ๋‹ค. 
์ง€๊ตฌ๊ฐ€ ์œ ์„ฑ์ฒด ํ๋ฆ„์„ ํœฉ์“ธ๊ณ  ์ง€๋‚˜๊ฐˆ ๋•Œ ์œ ์„ฑ์šฐ๊ฐ€ ์ผ์–ด๋‚œ๋‹ค. pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on Alibaba-NLP/gte-multilingual-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base) <!-- at revision 7fc06782350c1a83f88b15dd4b38ef853d3b8503 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the ๐Ÿค— Hub model = SentenceTransformer("seongil-dn/gte-further-filtered-neg5") # Run inference sentences = [ '์œ ์„ฑ์ฒด ํ๋ฆ„์€ ์–ด๋–ป๊ฒŒ ๋ถ„ํฌ๋˜์–ด ์žˆ๋‚˜์š”?', '์œ ์„ฑ์ฒด ํ๋ฆ„์€ ๋Œ€์ฒด๋กœ ๋ชจํ˜œ์„ฑ์˜ ๊ณต์ „๊ถค๋„๋ฅผ ์ค‘์‹ฌ์œผ๋กœ ์›ํ†ตํ˜•์œผ๋กœ ๋ถ„ํฌ๋˜์–ด ์žˆ๋‹ค. ์œ ์„ฑ์ฒด์˜ ๋ฐ€๋„๋Š” ๋ชจํ˜œ์„ฑ์˜ ๊ณต์ „ ๊ถค๋„๋กœ ๊ฐˆ์ˆ˜๋ก ๋†’์•„์ง€๋ฉฐ, ์ง€๊ตฌ๊ฐ€ ์ด๋Ÿฌํ•œ ์œ ์„ฑ์ฒด ํ๋ฆ„์„ ๊ด€ํ†ตํ•  ๋•Œ, ์ค‘์‹ฌ์— ๋‹ค๊ฐ€๊ฐˆ์ˆ˜๋ก ๋” ๋งŽ์€ ์œ ์„ฑ์ฒด๊ฐ€ ์ง€๊ตฌ ๋Œ€๊ธฐ ์†์œผ๋กœ ๋Œ์ž…ํ•˜๊ฒŒ ๋œ๋‹ค. ๋”ฐ๋ผ์„œ ํ•œ ์œ ์„ฑ์šฐ๊ฐ€ ๋‚˜ํƒ€๋‚  ๋•Œ๋Š” ๋งค์ผ ๋‚˜ํƒ€๋‚˜๋Š” ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๊ฐ€ ์ฆ๊ฐ€ํ•˜๋‹ค๊ฐ€ ๊ฐ์†Œํ•˜๋Š” ๊ฒฝํ–ฅ์„ ๋ค๋‹ค. ๊ด€์ธก์ ์œผ๋กœ ์ง€์ˆ˜ํ•จ์ˆ˜์ ์œผ๋กœ ์ฆ๊ฐ€ํ•˜๋‹ค๊ฐ€ ์ง€์ˆ˜ํ•จ์ˆ˜์ ์œผ๋กœ ๊ฐ์†Œํ•˜๋Š” ๊ฒฝํ–ฅ์„ ๋ณด์ธ๋‹ค. ํ•œ ์œ ์„ฑ์šฐ๊ฐ€ ๋‚˜ํƒ€๋‚˜๋Š” ์‹œ๊ธฐ์˜ ์œ ์„ฑ๊ฐœ์ˆ˜์˜ ๋ณ€ํ™”๋Š”, ์–ด๋–ค ์‹œ์  formula_1์—์„œ formula_2 ์™€ ๊ฐ™์ด ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋‹ค. ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๋Š” formula_3์ผ ๋•Œ ์ตœ๋Œ€๊ฐ€ ๋˜๋Š”๋ฐ, ์ด๊ฒƒ์„ ๊ทน๋Œ€๊ธฐ๋ผ๊ณ  ํ•œ๋‹ค. ๋˜ํ•œ formula_4์˜ ์‹œ๊ฐ„ ๊ทœ๋ชจ๋Š” ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๊ฐ€ ํ™•์—ฐํ•˜๊ฒŒ ๋ณ€ํ•˜๋Š” ์‹œ๊ฐ„ ๊ทœ๋ชจ์— ํ•ด๋‹นํ•œ๋‹ค. ์ด๋ฅธ๋ฐ” ์ง€์ˆ˜ํ•จ์ˆ˜์  ์‹œ๊ฐ„์ฒ™๋„(e-folding time scale)์ด๋ผ๊ณ  ํ•˜๋Š” ๊ฒƒ์ด๋‹ค. 
๋‹จ์ˆœํžˆ ๋‚˜ํƒ€๋‚˜๋Š” ์œ ์„ฑ์˜ ๊ฐœ์ˆ˜๋ฅผ ์„ธ๊ธฐ๋งŒ ํ•ด๋„ ์ด๋Ÿฌํ•œ ๊ฐ’๋“ค์€ ์ธก์ •ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ด๋กœ๋ถ€ํ„ฐ ์ง€๊ตฌ ๊ณต์ „ ๊ถค๋„์ƒ์— ๋†“์—ฌ ์žˆ๋Š” ์œ ์„ฑ์ฒด ํ๋ฆ„์˜ ๋ถ„ํฌ๋ฅผ ์ž์„ธํžˆ ์—ฐ๊ตฌํ•  ์ˆ˜ ์žˆ๋‹ค.', '์œ ์„ฑ์ฒด๋Š” ๋Œ€๋ถ€๋ถ„ ํ˜œ์„ฑ์—์„œ ๋–จ์–ด์ ธ ๋‚˜์˜จ ๋ถ€์Šค๋Ÿฌ๊ธฐ์ด๋ฉฐ, ์ผ๋ถ€๋Š” ์†Œํ–‰์„ฑ์—์„œ ๋–จ์–ด์ ธ ๋‚˜์˜จ ๋ถ€์Šค๋Ÿฌ๊ธฐ๋„ ์žˆ๋‹ค. ์œ ์„ฑ์ฒด๋Š” ํ˜œ์„ฑ์ด ํ•ด์— ๊ฐ€๊นŒ์ด ์˜ฌ ๋•Œ๋งˆ๋‹ค ๋ฐฉ์ถœ๋˜๋Š”๋ฐ, ํ•ด์— ์ ‘๊ทผํ•œ ํ˜œ์„ฑ์˜ ์†๋„๋Š” ๋ณดํ†ต ์ˆ˜ ์‹ญ km/s๋ฅผ ๋„˜๋Š”๋‹ค. ์œ ์„ฑ์ฒด๋“ค์ด ํ˜œ์„ฑ์—์„œ ๋–จ์–ด์ ธ ๋‚˜์˜ฌ ๋•Œ, ๋ฐฉ์ถœ ์†๋„๊ฐ€ ์กฐ๊ธˆ์”ฉ ๋‹ค๋ฅด๊ณ  ํ˜œ์„ฑ์ด ๋˜ํ•œ ์ž์ „ํ•˜๋ฏ€๋กœ ์œ ์„ฑ์ฒด๋“ค์˜ ์†๋„ ์„ฑ๋ถ„์€ ํ˜œ์„ฑ์˜ ์†๋„์™€ ์•ฝ๊ฐ„์”ฉ ์ฐจ์ด๊ฐ€ ์ƒ๊ธฐ๊ฒŒ ๋œ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๊ทธ ์–‘์€ ํ˜œ์„ฑ์˜ ์†๋„์— ๋น„ํ•ด ์•„์ฃผ ์ž‘๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด ์ž‘์€ ์†๋„ ์ฐจ์ด ๋•Œ๋ฌธ์— ์œ ์„ฑ์ฒด๋“ค์€ ๋Œ€์ฒด๋กœ ํ˜œ์„ฑ์˜ ๊ถค๋„๋ฅผ ๋”ฐ๋ผ ์šด๋™์„ ํ•˜๋˜ ์•ฝ๊ฐ„์”ฉ ๋‹ค๋ฅธ ๊ถค๋„๋ฅผ ๋Œ๊ฒŒ ๋˜์–ด, ๋งˆ์นจ๋‚ด ํ˜œ์„ฑ์—์„œ ๋‚˜์˜จ ์œ ์„ฑ์ฒด๋“ค์€ ํ˜œ์„ฑ์˜ ๊ณต์ „ ๊ถค๋„๋ฅผ ๋”ฐ๋ผ ๋ ๋ฅผ ํ˜•์„ฑํ•˜๊ฒŒ ๋œ๋‹ค. ๋”๊ตฐ๋‹ค๋‚˜ ํ•œ๋ฒˆ ๋ฐฉ์ถœ๋œ ์œ ์„ฑ์ฒด๋Š” ์ฃผ๋กœ ๋ชฉ์„ฑ๊ณผ ํ•ด์˜ ์ธ๋ ฅ์„ ๋ฐ›๊ฒŒ ๋˜๋ฏ€๋กœ ๋ ๋Š” ์ ์  ๋” ๋„“์–ด์ง€๊ณ  ๊ท ์งˆํ•˜๊ฒŒ ๋œ๋‹ค. ์ด๊ฒƒ์„ ์œ ์„ฑ์ฒด ํ๋ฆ„(meteoroid stream)์ด๋ผ๊ณ  ํ•œ๋‹ค. ์ง€๊ตฌ๊ฐ€ ์œ ์„ฑ์ฒด ํ๋ฆ„์„ ํœฉ์“ธ๊ณ  ์ง€๋‚˜๊ฐˆ ๋•Œ ์œ ์„ฑ์šฐ๊ฐ€ ์ผ์–ด๋‚œ๋‹ค.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 64 - `learning_rate`: 7e-05 - `adam_epsilon`: 1e-07 - `warmup_ratio`: 0.05 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 64 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 7e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-07 - `max_grad_norm`: 1.0 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.05 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: True - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - 
`eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs <details><summary>Click to expand</summary> | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.0026 | 1 | 0.7679 | | 0.0052 | 2 | 0.62 | | 0.0078 | 3 | 0.5875 | | 0.0103 | 4 | 0.5567 | | 0.0129 | 5 | 0.6888 | | 0.0155 | 6 | 0.6659 | | 0.0181 | 7 | 0.6805 | | 0.0207 | 8 | 0.5872 | | 0.0233 | 9 | 0.7301 | | 0.0258 | 10 | 0.4989 | | 0.0284 | 11 | 0.6243 | | 0.0310 | 12 | 0.6136 | | 0.0336 | 13 | 0.6529 | | 0.0362 | 14 | 0.5536 | | 0.0388 | 15 | 0.7124 | | 0.0413 | 16 | 0.5901 | | 0.0439 | 17 | 0.5009 | | 0.0465 | 18 | 0.6692 | | 0.0491 | 19 | 0.5198 | | 0.0517 | 20 | 0.4958 | | 0.0543 | 21 | 0.5647 | | 0.0568 | 22 | 0.5084 | | 0.0594 | 23 | 0.6018 | | 0.0620 | 24 | 0.5501 | | 0.0646 | 25 | 0.6171 | | 0.0672 | 26 | 0.4677 | | 0.0698 | 27 | 0.4531 | | 0.0724 | 28 | 0.5457 | | 0.0749 | 29 | 0.4137 | | 0.0775 | 30 | 0.502 | | 0.0801 | 31 | 0.3585 | | 0.0827 | 32 | 0.4246 | | 0.0853 | 33 | 0.4401 | | 0.0879 | 34 | 0.448 | | 0.0904 | 35 | 0.4464 | | 0.0930 | 36 | 0.4546 | | 0.0956 | 37 | 0.4943 | | 0.0982 | 38 | 0.3874 | | 0.1008 | 39 | 0.4109 | | 0.1034 | 40 | 0.4747 | | 0.1059 | 41 | 0.3218 | | 0.1085 | 42 | 0.2444 | | 0.1111 | 43 | 0.4396 | | 0.1137 | 44 | 0.3343 | | 0.1163 | 45 | 0.4269 | | 0.1189 | 46 | 0.2613 | | 0.1214 | 47 | 0.4472 | | 0.1240 | 48 | 0.3737 | | 0.1266 | 49 | 0.3696 | | 0.1292 | 50 | 0.2962 | | 0.1318 | 51 | 0.3207 | | 0.1344 | 52 | 0.3006 | | 0.1370 | 53 | 0.266 | | 0.1395 | 54 | 0.4126 | | 0.1421 | 55 | 0.2782 | | 0.1447 | 56 | 0.3467 | | 0.1473 | 57 | 0.3688 | | 0.1499 | 58 | 0.3782 | | 0.1525 | 59 | 0.2399 | | 0.1550 | 60 | 0.3389 | | 0.1576 | 61 | 0.2953 | | 0.1602 | 62 | 0.262 | | 0.1628 | 63 | 0.2786 | | 0.1654 | 64 | 0.278 | | 0.1680 | 65 | 0.2649 | | 0.1705 | 66 | 0.2248 | | 0.1731 | 67 | 0.2802 | | 0.1757 | 68 | 0.1902 | | 0.1783 | 69 | 0.2678 | | 0.1809 | 70 | 0.2554 | | 0.1835 | 71 | 0.31 | | 0.1860 | 72 | 0.2631 | | 0.1886 | 73 | 0.2766 | | 0.1912 | 74 | 0.3062 | | 0.1938 | 75 | 0.2294 | | 0.1964 | 76 | 0.1803 | | 0.1990 | 77 | 0.345 | | 0.2016 | 78 | 0.2374 | | 0.2041 | 79 | 0.2737 | | 0.2067 | 80 | 0.2879 | | 0.2093 | 81 | 0.1561 | | 0.2119 | 82 | 0.2342 | | 0.2145 | 83 | 0.1912 | | 0.2171 | 84 | 0.2001 | | 0.2196 | 85 | 0.2577 | | 0.2222 | 86 | 0.236 | | 0.2248 | 87 | 0.2604 | | 0.2274 | 88 | 0.309 | | 0.2300 | 89 | 0.2576 | | 0.2326 | 90 | 0.254 | | 0.2351 | 91 | 0.1699 | | 0.2377 | 92 | 0.3595 | | 0.2403 | 93 | 0.2516 | | 0.2429 | 94 | 0.2495 | | 0.2455 | 95 | 0.2182 | | 0.2481 | 96 | 0.3665 | | 0.2506 | 97 | 0.3084 | | 0.2532 | 98 | 0.3122 | | 0.2558 | 99 | 0.2174 | | 0.2584 | 100 | 0.2536 | | 0.2610 | 101 | 0.1953 | | 0.2636 | 102 | 0.2979 | | 0.2661 | 103 | 0.1005 | | 0.2687 | 104 | 0.3461 | | 0.2713 | 105 | 0.2068 | | 0.2739 | 106 | 0.1989 | | 0.2765 | 107 | 0.3092 | | 0.2791 | 108 | 0.1499 | | 0.2817 | 109 | 0.1323 | | 0.2842 | 110 | 0.1536 | | 0.2868 | 111 | 0.264 | | 0.2894 | 112 | 0.1333 | | 0.2920 | 113 | 0.2626 | | 0.2946 | 114 | 0.2832 | | 0.2972 | 115 | 0.1162 | | 0.2997 | 116 | 0.2126 | | 0.3023 | 117 | 0.201 | | 0.3049 | 118 | 0.2199 | | 0.3075 | 119 | 0.2757 | | 0.3101 | 120 | 0.2305 | | 0.3127 | 121 | 0.2136 | | 0.3152 | 122 | 0.1326 | | 0.3178 | 123 | 0.1717 | | 0.3204 | 124 | 0.2084 | | 0.3230 | 125 | 0.2609 | | 0.3256 | 126 | 0.3399 | | 0.3282 | 127 | 0.2941 | | 0.3307 | 128 | 0.4065 | | 0.3333 | 129 | 0.1987 | | 0.3359 | 130 | 0.1859 | | 0.3385 | 131 | 0.1925 | | 0.3411 | 132 | 
0.2456 | | 0.3437 | 133 | 0.2226 | | 0.3463 | 134 | 0.1664 | | 0.3488 | 135 | 0.1657 | | 0.3514 | 136 | 0.2225 | | 0.3540 | 137 | 0.2497 | | 0.3566 | 138 | 0.297 | | 0.3592 | 139 | 0.2724 | | 0.3618 | 140 | 0.1881 | | 0.3643 | 141 | 0.2542 | | 0.3669 | 142 | 0.2917 | | 0.3695 | 143 | 0.1989 | | 0.3721 | 144 | 0.1373 | | 0.3747 | 145 | 0.1697 | | 0.3773 | 146 | 0.2558 | | 0.3798 | 147 | 0.1616 | | 0.3824 | 148 | 0.2284 | | 0.3850 | 149 | 0.1968 | | 0.3876 | 150 | 0.1204 | | 0.3902 | 151 | 0.2593 | | 0.3928 | 152 | 0.3826 | | 0.3953 | 153 | 0.2153 | | 0.3979 | 154 | 0.2661 | | 0.4005 | 155 | 0.2417 | | 0.4031 | 156 | 0.234 | | 0.4057 | 157 | 0.1506 | | 0.4083 | 158 | 0.1771 | | 0.4109 | 159 | 0.1616 | | 0.4134 | 160 | 0.1898 | | 0.4160 | 161 | 0.1969 | | 0.4186 | 162 | 0.2431 | | 0.4212 | 163 | 0.1992 | | 0.4238 | 164 | 0.192 | | 0.4264 | 165 | 0.2028 | | 0.4289 | 166 | 0.2382 | | 0.4315 | 167 | 0.2275 | | 0.4341 | 168 | 0.1574 | | 0.4367 | 169 | 0.2832 | | 0.4393 | 170 | 0.1972 | | 0.4419 | 171 | 0.2315 | | 0.4444 | 172 | 0.2247 | | 0.4470 | 173 | 0.2341 | | 0.4496 | 174 | 0.2244 | | 0.4522 | 175 | 0.1645 | | 0.4548 | 176 | 0.2609 | | 0.4574 | 177 | 0.1761 | | 0.4599 | 178 | 0.4045 | | 0.4625 | 179 | 0.1938 | | 0.4651 | 180 | 0.3102 | | 0.4677 | 181 | 0.1975 | | 0.4703 | 182 | 0.2006 | | 0.4729 | 183 | 0.1991 | | 0.4755 | 184 | 0.164 | | 0.4780 | 185 | 0.2669 | | 0.4806 | 186 | 0.1775 | | 0.4832 | 187 | 0.1271 | | 0.4858 | 188 | 0.2955 | | 0.4884 | 189 | 0.1761 | | 0.4910 | 190 | 0.2153 | | 0.4935 | 191 | 0.1312 | | 0.4961 | 192 | 0.2594 | | 0.4987 | 193 | 0.1715 | | 0.5013 | 194 | 0.2089 | | 0.5039 | 195 | 0.2036 | | 0.5065 | 196 | 0.1404 | | 0.5090 | 197 | 0.2259 | | 0.5116 | 198 | 0.1722 | | 0.5142 | 199 | 0.2353 | | 0.5168 | 200 | 0.2091 | | 0.5194 | 201 | 0.1738 | | 0.5220 | 202 | 0.1803 | | 0.5245 | 203 | 0.1872 | | 0.5271 | 204 | 0.1481 | | 0.5297 | 205 | 0.1634 | | 0.5323 | 206 | 0.3416 | | 0.5349 | 207 | 0.2206 | | 0.5375 | 208 | 0.2167 | | 0.5401 | 209 | 0.199 | | 0.5426 | 210 | 0.1626 | | 0.5452 | 211 | 0.3082 | | 0.5478 | 212 | 0.2092 | | 0.5504 | 213 | 0.2217 | | 0.5530 | 214 | 0.2334 | | 0.5556 | 215 | 0.1734 | | 0.5581 | 216 | 0.2058 | | 0.5607 | 217 | 0.2501 | | 0.5633 | 218 | 0.3214 | | 0.5659 | 219 | 0.1748 | | 0.5685 | 220 | 0.2109 | | 0.5711 | 221 | 0.1062 | | 0.5736 | 222 | 0.3309 | | 0.5762 | 223 | 0.1409 | | 0.5788 | 224 | 0.1875 | | 0.5814 | 225 | 0.2103 | | 0.5840 | 226 | 0.1565 | | 0.5866 | 227 | 0.2551 | | 0.5891 | 228 | 0.2042 | | 0.5917 | 229 | 0.1288 | | 0.5943 | 230 | 0.1366 | | 0.5969 | 231 | 0.1543 | | 0.5995 | 232 | 0.2069 | | 0.6021 | 233 | 0.2953 | | 0.6047 | 234 | 0.2239 | | 0.6072 | 235 | 0.2046 | | 0.6098 | 236 | 0.1682 | | 0.6124 | 237 | 0.2401 | | 0.6150 | 238 | 0.2596 | | 0.6176 | 239 | 0.1951 | | 0.6202 | 240 | 0.2029 | | 0.6227 | 241 | 0.1464 | | 0.6253 | 242 | 0.1661 | | 0.6279 | 243 | 0.1447 | | 0.6305 | 244 | 0.1014 | | 0.6331 | 245 | 0.1757 | | 0.6357 | 246 | 0.1526 | | 0.6382 | 247 | 0.1417 | | 0.6408 | 248 | 0.1654 | | 0.6434 | 249 | 0.2216 | | 0.6460 | 250 | 0.287 | | 0.6486 | 251 | 0.3283 | | 0.6512 | 252 | 0.1765 | | 0.6537 | 253 | 0.184 | | 0.6563 | 254 | 0.2038 | | 0.6589 | 255 | 0.2501 | | 0.6615 | 256 | 0.2285 | | 0.6641 | 257 | 0.2239 | | 0.6667 | 258 | 0.2949 | | 0.6693 | 259 | 0.1532 | | 0.6718 | 260 | 0.2584 | | 0.6744 | 261 | 0.1513 | | 0.6770 | 262 | 0.1326 | | 0.6796 | 263 | 0.2777 | | 0.6822 | 264 | 0.1235 | | 0.6848 | 265 | 0.1843 | | 0.6873 | 266 | 0.2934 | | 0.6899 | 267 | 0.1732 | | 0.6925 | 268 | 0.177 | | 0.6951 | 269 | 
0.1428 | | 0.6977 | 270 | 0.1583 | | 0.7003 | 271 | 0.208 | | 0.7028 | 272 | 0.1847 | | 0.7054 | 273 | 0.1349 | | 0.7080 | 274 | 0.1644 | | 0.7106 | 275 | 0.214 | | 0.7132 | 276 | 0.2338 | | 0.7158 | 277 | 0.2421 | | 0.7183 | 278 | 0.1836 | | 0.7209 | 279 | 0.3185 | | 0.7235 | 280 | 0.228 | | 0.7261 | 281 | 0.2234 | | 0.7287 | 282 | 0.2504 | | 0.7313 | 283 | 0.1918 | | 0.7339 | 284 | 0.2107 | | 0.7364 | 285 | 0.1607 | | 0.7390 | 286 | 0.1298 | | 0.7416 | 287 | 0.2802 | | 0.7442 | 288 | 0.1903 | | 0.7468 | 289 | 0.2628 | | 0.7494 | 290 | 0.1593 | | 0.7519 | 291 | 0.1993 | | 0.7545 | 292 | 0.1634 | | 0.7571 | 293 | 0.2143 | | 0.7597 | 294 | 0.2684 | | 0.7623 | 295 | 0.1996 | | 0.7649 | 296 | 0.1374 | | 0.7674 | 297 | 0.1547 | | 0.7700 | 298 | 0.2221 | | 0.7726 | 299 | 0.1802 | | 0.7752 | 300 | 0.2051 | | 0.7778 | 301 | 0.1657 | | 0.7804 | 302 | 0.1539 | | 0.7829 | 303 | 0.1398 | | 0.7855 | 304 | 0.211 | | 0.7881 | 305 | 0.2118 | | 0.7907 | 306 | 0.2215 | | 0.7933 | 307 | 0.1258 | | 0.7959 | 308 | 0.1504 | | 0.7984 | 309 | 0.2606 | | 0.8010 | 310 | 0.1805 | | 0.8036 | 311 | 0.2559 | | 0.8062 | 312 | 0.1002 | | 0.8088 | 313 | 0.2279 | | 0.8114 | 314 | 0.1518 | | 0.8140 | 315 | 0.191 | | 0.8165 | 316 | 0.1891 | | 0.8191 | 317 | 0.1497 | | 0.8217 | 318 | 0.1704 | | 0.8243 | 319 | 0.1839 | | 0.8269 | 320 | 0.132 | | 0.8295 | 321 | 0.2276 | | 0.8320 | 322 | 0.2594 | | 0.8346 | 323 | 0.1868 | | 0.8372 | 324 | 0.1443 | | 0.8398 | 325 | 0.1967 | | 0.8424 | 326 | 0.1041 | | 0.8450 | 327 | 0.2678 | | 0.8475 | 328 | 0.1805 | | 0.8501 | 329 | 0.1565 | | 0.8527 | 330 | 0.1672 | | 0.8553 | 331 | 0.1416 | | 0.8579 | 332 | 0.1541 | | 0.8605 | 333 | 0.177 | | 0.8630 | 334 | 0.098 | | 0.8656 | 335 | 0.2422 | | 0.8682 | 336 | 0.1849 | | 0.8708 | 337 | 0.0895 | | 0.8734 | 338 | 0.2132 | | 0.8760 | 339 | 0.1613 | | 0.8786 | 340 | 0.1912 | | 0.8811 | 341 | 0.2053 | | 0.8837 | 342 | 0.1021 | | 0.8863 | 343 | 0.2787 | | 0.8889 | 344 | 0.1864 | | 0.8915 | 345 | 0.2768 | | 0.8941 | 346 | 0.1357 | | 0.8966 | 347 | 0.1293 | | 0.8992 | 348 | 0.1857 | | 0.9018 | 349 | 0.1266 | | 0.9044 | 350 | 0.1166 | | 0.9070 | 351 | 0.2127 | | 0.9096 | 352 | 0.2263 | | 0.9121 | 353 | 0.2055 | | 0.9147 | 354 | 0.164 | | 0.9173 | 355 | 0.0932 | | 0.9199 | 356 | 0.1028 | | 0.9225 | 357 | 0.142 | | 0.9251 | 358 | 0.1558 | | 0.9276 | 359 | 0.149 | | 0.9302 | 360 | 0.1967 | | 0.9328 | 361 | 0.1272 | | 0.9354 | 362 | 0.2464 | | 0.9380 | 363 | 0.1894 | | 0.9406 | 364 | 0.2198 | | 0.9432 | 365 | 0.1901 | | 0.9457 | 366 | 0.1614 | | 0.9483 | 367 | 0.1307 | | 0.9509 | 368 | 0.1794 | | 0.9535 | 369 | 0.2301 | | 0.9561 | 370 | 0.1924 | | 0.9587 | 371 | 0.2617 | | 0.9612 | 372 | 0.1623 | | 0.9638 | 373 | 0.1443 | | 0.9664 | 374 | 0.2275 | | 0.9690 | 375 | 0.2367 | | 0.9716 | 376 | 0.1893 | | 0.9742 | 377 | 0.2257 | | 0.9767 | 378 | 0.2445 | | 0.9793 | 379 | 0.2034 | | 0.9819 | 380 | 0.2347 | | 0.9845 | 381 | 0.1305 | | 0.9871 | 382 | 0.1996 | | 0.9897 | 383 | 0.1434 | | 0.9922 | 384 | 0.2763 | | 0.9948 | 385 | 0.1748 | | 0.9974 | 386 | 0.2023 | | 1.0 | 387 | 0.1138 | | 1.0026 | 388 | 0.182 | | 1.0052 | 389 | 0.2217 | | 1.0078 | 390 | 0.1567 | | 1.0103 | 391 | 0.1927 | | 1.0129 | 392 | 0.2401 | | 1.0155 | 393 | 0.21 | | 1.0181 | 394 | 0.2667 | | 1.0207 | 395 | 0.2306 | | 1.0233 | 396 | 0.1865 | | 1.0258 | 397 | 0.0838 | | 1.0284 | 398 | 0.165 | | 1.0310 | 399 | 0.1608 | | 1.0336 | 400 | 0.1601 | | 1.0362 | 401 | 0.1399 | | 1.0388 | 402 | 0.2035 | | 1.0413 | 403 | 0.1325 | | 1.0439 | 404 | 0.1175 | | 1.0465 | 405 | 0.2415 | | 1.0491 | 406 | 0.12 | | 
1.0517 | 407 | 0.1919 | | 1.0543 | 408 | 0.1639 | | 1.0568 | 409 | 0.0994 | | 1.0594 | 410 | 0.1722 | | 1.0620 | 411 | 0.2044 | | 1.0646 | 412 | 0.2362 | | 1.0672 | 413 | 0.2272 | | 1.0698 | 414 | 0.2148 | | 1.0724 | 415 | 0.2257 | | 1.0749 | 416 | 0.1302 | | 1.0775 | 417 | 0.1836 | | 1.0801 | 418 | 0.0973 | | 1.0827 | 419 | 0.1845 | | 1.0853 | 420 | 0.2031 | | 1.0879 | 421 | 0.1751 | | 1.0904 | 422 | 0.1797 | | 1.0930 | 423 | 0.1789 | | 1.0956 | 424 | 0.1537 | | 1.0982 | 425 | 0.1147 | | 1.1008 | 426 | 0.1214 | | 1.1034 | 427 | 0.2233 | | 1.1059 | 428 | 0.1137 | | 1.1085 | 429 | 0.0887 | | 1.1111 | 430 | 0.1535 | | 1.1137 | 431 | 0.1446 | | 1.1163 | 432 | 0.1788 | | 1.1189 | 433 | 0.1113 | | 1.1214 | 434 | 0.1585 | | 1.1240 | 435 | 0.1116 | | 1.1266 | 436 | 0.1044 | | 1.1292 | 437 | 0.1311 | | 1.1318 | 438 | 0.1835 | | 1.1344 | 439 | 0.1185 | | 1.1370 | 440 | 0.1198 | | 1.1395 | 441 | 0.1567 | | 1.1421 | 442 | 0.1518 | | 1.1447 | 443 | 0.1392 | | 1.1473 | 444 | 0.1552 | | 1.1499 | 445 | 0.1994 | | 1.1525 | 446 | 0.1148 | | 1.1550 | 447 | 0.1939 | | 1.1576 | 448 | 0.1672 | | 1.1602 | 449 | 0.0955 | | 1.1628 | 450 | 0.1521 | | 1.1654 | 451 | 0.1195 | | 1.1680 | 452 | 0.1026 | | 1.1705 | 453 | 0.0847 | | 1.1731 | 454 | 0.1475 | | 1.1757 | 455 | 0.0908 | | 1.1783 | 456 | 0.154 | | 1.1809 | 457 | 0.1033 | | 1.1835 | 458 | 0.1876 | | 1.1860 | 459 | 0.1087 | | 1.1886 | 460 | 0.1425 | | 1.1912 | 461 | 0.2407 | | 1.1938 | 462 | 0.1317 | | 1.1964 | 463 | 0.0819 | | 1.1990 | 464 | 0.1737 | | 1.2016 | 465 | 0.1224 | | 1.2041 | 466 | 0.1347 | | 1.2067 | 467 | 0.1011 | | 1.2093 | 468 | 0.071 | | 1.2119 | 469 | 0.1006 | | 1.2145 | 470 | 0.1182 | | 1.2171 | 471 | 0.0642 | | 1.2196 | 472 | 0.1359 | | 1.2222 | 473 | 0.1492 | | 1.2248 | 474 | 0.1573 | | 1.2274 | 475 | 0.1393 | | 1.2300 | 476 | 0.1126 | | 1.2326 | 477 | 0.1377 | | 1.2351 | 478 | 0.1398 | | 1.2377 | 479 | 0.1944 | | 1.2403 | 480 | 0.1248 | | 1.2429 | 481 | 0.1594 | | 1.2455 | 482 | 0.1209 | | 1.2481 | 483 | 0.2041 | | 1.2506 | 484 | 0.2128 | | 1.2532 | 485 | 0.1167 | | 1.2558 | 486 | 0.114 | | 1.2584 | 487 | 0.1788 | | 1.2610 | 488 | 0.0821 | | 1.2636 | 489 | 0.137 | | 1.2661 | 490 | 0.0511 | | 1.2687 | 491 | 0.2547 | | 1.2713 | 492 | 0.1569 | | 1.2739 | 493 | 0.113 | | 1.2765 | 494 | 0.1901 | | 1.2791 | 495 | 0.0671 | | 1.2817 | 496 | 0.086 | | 1.2842 | 497 | 0.0904 | | 1.2868 | 498 | 0.1443 | | 1.2894 | 499 | 0.1084 | | 1.2920 | 500 | 0.172 | | 1.2946 | 501 | 0.1291 | | 1.2972 | 502 | 0.0481 | | 1.2997 | 503 | 0.1722 | | 1.3023 | 504 | 0.1525 | | 1.3049 | 505 | 0.1231 | | 1.3075 | 506 | 0.1528 | | 1.3101 | 507 | 0.1604 | | 1.3127 | 508 | 0.1446 | | 1.3152 | 509 | 0.0584 | | 1.3178 | 510 | 0.0731 | | 1.3204 | 511 | 0.128 | | 1.3230 | 512 | 0.1482 | | 1.3256 | 513 | 0.227 | | 1.3282 | 514 | 0.1262 | | 1.3307 | 515 | 0.3067 | | 1.3333 | 516 | 0.1197 | | 1.3359 | 517 | 0.1136 | | 1.3385 | 518 | 0.1098 | | 1.3411 | 519 | 0.173 | | 1.3437 | 520 | 0.0962 | | 1.3463 | 521 | 0.0972 | | 1.3488 | 522 | 0.0965 | | 1.3514 | 523 | 0.1618 | | 1.3540 | 524 | 0.15 | | 1.3566 | 525 | 0.2188 | | 1.3592 | 526 | 0.186 | | 1.3618 | 527 | 0.1546 | | 1.3643 | 528 | 0.1107 | | 1.3669 | 529 | 0.1336 | | 1.3695 | 530 | 0.1382 | | 1.3721 | 531 | 0.1081 | | 1.3747 | 532 | 0.0808 | | 1.3773 | 533 | 0.1351 | | 1.3798 | 534 | 0.1112 | | 1.3824 | 535 | 0.104 | | 1.3850 | 536 | 0.0949 | | 1.3876 | 537 | 0.0972 | | 1.3902 | 538 | 0.1416 | | 1.3928 | 539 | 0.2878 | | 1.3953 | 540 | 0.1246 | | 1.3979 | 541 | 0.1605 | | 1.4005 | 542 | 0.2012 | | 1.4031 | 543 | 0.1472 | | 1.4057 
| 544 | 0.0939 | | 1.4083 | 545 | 0.1146 | | 1.4109 | 546 | 0.0897 | | 1.4134 | 547 | 0.1545 | | 1.4160 | 548 | 0.1224 | | 1.4186 | 549 | 0.134 | | 1.4212 | 550 | 0.1823 | | 1.4238 | 551 | 0.1636 | | 1.4264 | 552 | 0.1333 | | 1.4289 | 553 | 0.1029 | | 1.4315 | 554 | 0.1856 | | 1.4341 | 555 | 0.1147 | | 1.4367 | 556 | 0.1698 | | 1.4393 | 557 | 0.1202 | | 1.4419 | 558 | 0.1402 | | 1.4444 | 559 | 0.1612 | | 1.4470 | 560 | 0.1623 | | 1.4496 | 561 | 0.1503 | | 1.4522 | 562 | 0.1027 | | 1.4548 | 563 | 0.1812 | | 1.4574 | 564 | 0.0991 | | 1.4599 | 565 | 0.2166 | | 1.4625 | 566 | 0.1367 | | 1.4651 | 567 | 0.215 | | 1.4677 | 568 | 0.1303 | | 1.4703 | 569 | 0.1031 | | 1.4729 | 570 | 0.1407 | | 1.4755 | 571 | 0.0845 | | 1.4780 | 572 | 0.1248 | | 1.4806 | 573 | 0.106 | | 1.4832 | 574 | 0.074 | | 1.4858 | 575 | 0.1855 | | 1.4884 | 576 | 0.0906 | | 1.4910 | 577 | 0.1173 | | 1.4935 | 578 | 0.0889 | | 1.4961 | 579 | 0.1688 | | 1.4987 | 580 | 0.1116 | | 1.5013 | 581 | 0.1711 | | 1.5039 | 582 | 0.1506 | | 1.5065 | 583 | 0.0962 | | 1.5090 | 584 | 0.1381 | | 1.5116 | 585 | 0.1132 | | 1.5142 | 586 | 0.1617 | | 1.5168 | 587 | 0.1476 | | 1.5194 | 588 | 0.0938 | | 1.5220 | 589 | 0.1264 | | 1.5245 | 590 | 0.1138 | | 1.5271 | 591 | 0.0822 | | 1.5297 | 592 | 0.091 | | 1.5323 | 593 | 0.2277 | | 1.5349 | 594 | 0.1301 | | 1.5375 | 595 | 0.1917 | | 1.5401 | 596 | 0.1524 | | 1.5426 | 597 | 0.1021 | | 1.5452 | 598 | 0.2273 | | 1.5478 | 599 | 0.1036 | | 1.5504 | 600 | 0.167 | | 1.5530 | 601 | 0.1483 | | 1.5556 | 602 | 0.1117 | | 1.5581 | 603 | 0.1354 | | 1.5607 | 604 | 0.1454 | | 1.5633 | 605 | 0.3006 | | 1.5659 | 606 | 0.1378 | | 1.5685 | 607 | 0.18 | | 1.5711 | 608 | 0.083 | | 1.5736 | 609 | 0.2083 | | 1.5762 | 610 | 0.0824 | | 1.5788 | 611 | 0.1476 | | 1.5814 | 612 | 0.1499 | | 1.5840 | 613 | 0.1092 | | 1.5866 | 614 | 0.2291 | | 1.5891 | 615 | 0.1121 | | 1.5917 | 616 | 0.0798 | | 1.5943 | 617 | 0.0843 | | 1.5969 | 618 | 0.1143 | | 1.5995 | 619 | 0.1062 | | 1.6021 | 620 | 0.209 | | 1.6047 | 621 | 0.1556 | | 1.6072 | 622 | 0.1828 | | 1.6098 | 623 | 0.1107 | | 1.6124 | 624 | 0.1827 | | 1.6150 | 625 | 0.1885 | | 1.6176 | 626 | 0.1606 | | 1.6202 | 627 | 0.1561 | | 1.6227 | 628 | 0.1256 | | 1.6253 | 629 | 0.077 | | 1.6279 | 630 | 0.0826 | | 1.6305 | 631 | 0.118 | | 1.6331 | 632 | 0.0998 | | 1.6357 | 633 | 0.0782 | | 1.6382 | 634 | 0.1448 | | 1.6408 | 635 | 0.1195 | | 1.6434 | 636 | 0.1879 | | 1.6460 | 637 | 0.1733 | | 1.6486 | 638 | 0.2013 | | 1.6512 | 639 | 0.1088 | | 1.6537 | 640 | 0.1584 | | 1.6563 | 641 | 0.1345 | | 1.6589 | 642 | 0.2369 | | 1.6615 | 643 | 0.1484 | | 1.6641 | 644 | 0.1784 | | 1.6667 | 645 | 0.2001 | | 1.6693 | 646 | 0.1264 | | 1.6718 | 647 | 0.1867 | | 1.6744 | 648 | 0.0808 | | 1.6770 | 649 | 0.0975 | | 1.6796 | 650 | 0.156 | | 1.6822 | 651 | 0.076 | | 1.6848 | 652 | 0.1397 | | 1.6873 | 653 | 0.1591 | | 1.6899 | 654 | 0.1405 | | 1.6925 | 655 | 0.0888 | | 1.6951 | 656 | 0.1066 | | 1.6977 | 657 | 0.0932 | | 1.7003 | 658 | 0.1541 | | 1.7028 | 659 | 0.1614 | | 1.7054 | 660 | 0.0826 | | 1.7080 | 661 | 0.1334 | | 1.7106 | 662 | 0.154 | | 1.7132 | 663 | 0.1452 | | 1.7158 | 664 | 0.1708 | | 1.7183 | 665 | 0.1472 | | 1.7209 | 666 | 0.2017 | | 1.7235 | 667 | 0.1821 | | 1.7261 | 668 | 0.169 | | 1.7287 | 669 | 0.1658 | | 1.7313 | 670 | 0.1081 | | 1.7339 | 671 | 0.1613 | | 1.7364 | 672 | 0.0995 | | 1.7390 | 673 | 0.127 | | 1.7416 | 674 | 0.1893 | | 1.7442 | 675 | 0.1249 | | 1.7468 | 676 | 0.1756 | | 1.7494 | 677 | 0.1034 | | 1.7519 | 678 | 0.1402 | | 1.7545 | 679 | 0.099 | | 1.7571 | 680 | 0.1466 | | 1.7597 | 681 | 
0.1805 | | 1.7623 | 682 | 0.0954 | | 1.7649 | 683 | 0.102 | | 1.7674 | 684 | 0.0911 | | 1.7700 | 685 | 0.1214 | | 1.7726 | 686 | 0.1039 | | 1.7752 | 687 | 0.1147 | | 1.7778 | 688 | 0.0865 | | 1.7804 | 689 | 0.1019 | | 1.7829 | 690 | 0.0771 | | 1.7855 | 691 | 0.1347 | | 1.7881 | 692 | 0.1696 | | 1.7907 | 693 | 0.1564 | | 1.7933 | 694 | 0.1041 | | 1.7959 | 695 | 0.1377 | | 1.7984 | 696 | 0.2311 | | 1.8010 | 697 | 0.1562 | | 1.8036 | 698 | 0.1466 | | 1.8062 | 699 | 0.0636 | | 1.8088 | 700 | 0.1792 | | 1.8114 | 701 | 0.0998 | | 1.8140 | 702 | 0.1436 | | 1.8165 | 703 | 0.134 | | 1.8191 | 704 | 0.1326 | | 1.8217 | 705 | 0.1714 | | 1.8243 | 706 | 0.123 | | 1.8269 | 707 | 0.119 | | 1.8295 | 708 | 0.1803 | | 1.8320 | 709 | 0.1752 | | 1.8346 | 710 | 0.1116 | | 1.8372 | 711 | 0.1199 | | 1.8398 | 712 | 0.1444 | | 1.8424 | 713 | 0.0871 | | 1.8450 | 714 | 0.2385 | | 1.8475 | 715 | 0.1565 | | 1.8501 | 716 | 0.1185 | | 1.8527 | 717 | 0.101 | | 1.8553 | 718 | 0.1285 | | 1.8579 | 719 | 0.1247 | | 1.8605 | 720 | 0.1326 | | 1.8630 | 721 | 0.1049 | | 1.8656 | 722 | 0.1918 | | 1.8682 | 723 | 0.1417 | | 1.8708 | 724 | 0.097 | | 1.8734 | 725 | 0.1953 | | 1.8760 | 726 | 0.1396 | | 1.8786 | 727 | 0.1773 | | 1.8811 | 728 | 0.1404 | | 1.8837 | 729 | 0.1049 | | 1.8863 | 730 | 0.2029 | | 1.8889 | 731 | 0.1597 | | 1.8915 | 732 | 0.1989 | | 1.8941 | 733 | 0.0921 | | 1.8966 | 734 | 0.0777 | | 1.8992 | 735 | 0.1241 | | 1.9018 | 736 | 0.1116 | | 1.9044 | 737 | 0.1017 | | 1.9070 | 738 | 0.1241 | | 1.9096 | 739 | 0.1601 | | 1.9121 | 740 | 0.1472 | | 1.9147 | 741 | 0.1218 | | 1.9173 | 742 | 0.0903 | | 1.9199 | 743 | 0.0777 | | 1.9225 | 744 | 0.1115 | | 1.9251 | 745 | 0.109 | | 1.9276 | 746 | 0.1291 | | 1.9302 | 747 | 0.1893 | | 1.9328 | 748 | 0.1234 | | 1.9354 | 749 | 0.25 | | 1.9380 | 750 | 0.1475 | | 1.9406 | 751 | 0.1574 | | 1.9432 | 752 | 0.2231 | | 1.9457 | 753 | 0.1341 | | 1.9483 | 754 | 0.0776 | | 1.9509 | 755 | 0.1712 | | 1.9535 | 756 | 0.1629 | | 1.9561 | 757 | 0.1751 | | 1.9587 | 758 | 0.2061 | | 1.9612 | 759 | 0.1329 | | 1.9638 | 760 | 0.1284 | | 1.9664 | 761 | 0.1937 | | 1.9690 | 762 | 0.1458 | | 1.9716 | 763 | 0.1317 | | 1.9742 | 764 | 0.1141 | | 1.9767 | 765 | 0.2299 | | 1.9793 | 766 | 0.1455 | | 1.9819 | 767 | 0.1535 | | 1.9845 | 768 | 0.1123 | | 1.9871 | 769 | 0.1963 | | 1.9897 | 770 | 0.0977 | | 1.9922 | 771 | 0.1847 | | 1.9948 | 772 | 0.1192 | | 1.9974 | 773 | 0.1481 | | 2.0 | 774 | 0.0941 | | 2.0026 | 775 | 0.1925 | | 2.0052 | 776 | 0.2023 | | 2.0078 | 777 | 0.0936 | | 2.0103 | 778 | 0.161 | | 2.0129 | 779 | 0.1958 | | 2.0155 | 780 | 0.1642 | | 2.0181 | 781 | 0.2644 | | 2.0207 | 782 | 0.1858 | | 2.0233 | 783 | 0.149 | | 2.0258 | 784 | 0.0721 | | 2.0284 | 785 | 0.1602 | | 2.0310 | 786 | 0.083 | | 2.0336 | 787 | 0.1192 | | 2.0362 | 788 | 0.1133 | | 2.0388 | 789 | 0.161 | | 2.0413 | 790 | 0.1 | | 2.0439 | 791 | 0.1142 | | 2.0465 | 792 | 0.1761 | | 2.0491 | 793 | 0.0686 | | 2.0517 | 794 | 0.1064 | | 2.0543 | 795 | 0.1621 | | 2.0568 | 796 | 0.0788 | | 2.0594 | 797 | 0.1472 | | 2.0620 | 798 | 0.1717 | | 2.0646 | 799 | 0.1991 | | 2.0672 | 800 | 0.129 | | 2.0698 | 801 | 0.177 | | 2.0724 | 802 | 0.1344 | | 2.0749 | 803 | 0.1433 | | 2.0775 | 804 | 0.1261 | | 2.0801 | 805 | 0.0999 | | 2.0827 | 806 | 0.1114 | | 2.0853 | 807 | 0.1265 | | 2.0879 | 808 | 0.1632 | | 2.0904 | 809 | 0.1247 | | 2.0930 | 810 | 0.1392 | | 2.0956 | 811 | 0.1489 | | 2.0982 | 812 | 0.1131 | | 2.1008 | 813 | 0.1147 | | 2.1034 | 814 | 0.1957 | | 2.1059 | 815 | 0.0873 | | 2.1085 | 816 | 0.0996 | | 2.1111 | 817 | 0.1317 | | 2.1137 | 818 | 0.087 | | 
2.1163 | 819 | 0.1294 | | 2.1189 | 820 | 0.0748 | | 2.1214 | 821 | 0.1382 | | 2.1240 | 822 | 0.0727 | | 2.1266 | 823 | 0.0985 | | 2.1292 | 824 | 0.1322 | | 2.1318 | 825 | 0.1439 | | 2.1344 | 826 | 0.1046 | | 2.1370 | 827 | 0.0978 | | 2.1395 | 828 | 0.1453 | | 2.1421 | 829 | 0.1113 | | 2.1447 | 830 | 0.1313 | | 2.1473 | 831 | 0.1431 | | 2.1499 | 832 | 0.2131 | | 2.1525 | 833 | 0.1018 | | 2.1550 | 834 | 0.0969 | | 2.1576 | 835 | 0.107 | | 2.1602 | 836 | 0.0698 | | 2.1628 | 837 | 0.1345 | | 2.1654 | 838 | 0.1115 | | 2.1680 | 839 | 0.1115 | | 2.1705 | 840 | 0.0778 | | 2.1731 | 841 | 0.1101 | | 2.1757 | 842 | 0.0845 | | 2.1783 | 843 | 0.169 | | 2.1809 | 844 | 0.0887 | | 2.1835 | 845 | 0.1837 | | 2.1860 | 846 | 0.0934 | | 2.1886 | 847 | 0.1031 | | 2.1912 | 848 | 0.2021 | | 2.1938 | 849 | 0.1224 | | 2.1964 | 850 | 0.0763 | | 2.1990 | 851 | 0.1701 | | 2.2016 | 852 | 0.1097 | | 2.2041 | 853 | 0.1054 | | 2.2067 | 854 | 0.1055 | | 2.2093 | 855 | 0.0642 | | 2.2119 | 856 | 0.0964 | | 2.2145 | 857 | 0.0907 | | 2.2171 | 858 | 0.0438 | | 2.2196 | 859 | 0.1099 | | 2.2222 | 860 | 0.0662 | | 2.2248 | 861 | 0.1545 | | 2.2274 | 862 | 0.1122 | | 2.2300 | 863 | 0.0936 | | 2.2326 | 864 | 0.1189 | | 2.2351 | 865 | 0.1155 | | 2.2377 | 866 | 0.2454 | | 2.2403 | 867 | 0.0919 | | 2.2429 | 868 | 0.1388 | | 2.2455 | 869 | 0.1175 | | 2.2481 | 870 | 0.1887 | | 2.2506 | 871 | 0.156 | | 2.2532 | 872 | 0.1174 | | 2.2558 | 873 | 0.0975 | | 2.2584 | 874 | 0.125 | | 2.2610 | 875 | 0.0622 | | 2.2636 | 876 | 0.1722 | | 2.2661 | 877 | 0.0392 | | 2.2687 | 878 | 0.2179 | | 2.2713 | 879 | 0.1214 | | 2.2739 | 880 | 0.0739 | | 2.2765 | 881 | 0.1898 | | 2.2791 | 882 | 0.0633 | | 2.2817 | 883 | 0.0678 | | 2.2842 | 884 | 0.0751 | | 2.2868 | 885 | 0.1197 | | 2.2894 | 886 | 0.0962 | | 2.2920 | 887 | 0.1359 | | 2.2946 | 888 | 0.0795 | | 2.2972 | 889 | 0.0543 | | 2.2997 | 890 | 0.1326 | | 2.3023 | 891 | 0.1348 | | 2.3049 | 892 | 0.1181 | | 2.3075 | 893 | 0.134 | | 2.3101 | 894 | 0.0984 | | 2.3127 | 895 | 0.1143 | | 2.3152 | 896 | 0.0519 | | 2.3178 | 897 | 0.0784 | | 2.3204 | 898 | 0.1062 | | 2.3230 | 899 | 0.1416 | | 2.3256 | 900 | 0.1379 | | 2.3282 | 901 | 0.1259 | | 2.3307 | 902 | 0.2359 | | 2.3333 | 903 | 0.0901 | | 2.3359 | 904 | 0.1005 | | 2.3385 | 905 | 0.1075 | | 2.3411 | 906 | 0.1281 | | 2.3437 | 907 | 0.1083 | | 2.3463 | 908 | 0.0609 | | 2.3488 | 909 | 0.0793 | | 2.3514 | 910 | 0.1184 | | 2.3540 | 911 | 0.1328 | | 2.3566 | 912 | 0.1867 | | 2.3592 | 913 | 0.1976 | | 2.3618 | 914 | 0.1121 | | 2.3643 | 915 | 0.1059 | | 2.3669 | 916 | 0.1417 | | 2.3695 | 917 | 0.1515 | | 2.3721 | 918 | 0.1093 | | 2.3747 | 919 | 0.0735 | | 2.3773 | 920 | 0.1362 | | 2.3798 | 921 | 0.1134 | | 2.3824 | 922 | 0.1356 | | 2.3850 | 923 | 0.075 | | 2.3876 | 924 | 0.0728 | | 2.3902 | 925 | 0.1262 | | 2.3928 | 926 | 0.2486 | | 2.3953 | 927 | 0.1384 | | 2.3979 | 928 | 0.1543 | | 2.4005 | 929 | 0.1447 | | 2.4031 | 930 | 0.1118 | | 2.4057 | 931 | 0.0785 | | 2.4083 | 932 | 0.1008 | | 2.4109 | 933 | 0.0567 | | 2.4134 | 934 | 0.1422 | | 2.4160 | 935 | 0.1267 | | 2.4186 | 936 | 0.1239 | | 2.4212 | 937 | 0.1792 | | 2.4238 | 938 | 0.1396 | | 2.4264 | 939 | 0.1063 | | 2.4289 | 940 | 0.0991 | | 2.4315 | 941 | 0.12 | | 2.4341 | 942 | 0.0853 | | 2.4367 | 943 | 0.1595 | | 2.4393 | 944 | 0.0952 | | 2.4419 | 945 | 0.1225 | | 2.4444 | 946 | 0.1013 | | 2.4470 | 947 | 0.1431 | | 2.4496 | 948 | 0.1648 | | 2.4522 | 949 | 0.1057 | | 2.4548 | 950 | 0.2071 | | 2.4574 | 951 | 0.0992 | | 2.4599 | 952 | 0.2224 | | 2.4625 | 953 | 0.12 | | 2.4651 | 954 | 0.168 | | 2.4677 | 955 | 0.0934 | | 
2.4703 | 956 | 0.1027 | | 2.4729 | 957 | 0.1511 | | 2.4755 | 958 | 0.055 | | 2.4780 | 959 | 0.1711 | | 2.4806 | 960 | 0.1041 | | 2.4832 | 961 | 0.0517 | | 2.4858 | 962 | 0.1721 | | 2.4884 | 963 | 0.0752 | | 2.4910 | 964 | 0.1414 | | 2.4935 | 965 | 0.0806 | | 2.4961 | 966 | 0.1239 | | 2.4987 | 967 | 0.1261 | | 2.5013 | 968 | 0.1695 | | 2.5039 | 969 | 0.115 | | 2.5065 | 970 | 0.1079 | | 2.5090 | 971 | 0.1031 | | 2.5116 | 972 | 0.0872 | | 2.5142 | 973 | 0.1775 | | 2.5168 | 974 | 0.1164 | | 2.5194 | 975 | 0.0926 | | 2.5220 | 976 | 0.1239 | | 2.5245 | 977 | 0.1012 | | 2.5271 | 978 | 0.07 | | 2.5297 | 979 | 0.1009 | | 2.5323 | 980 | 0.2477 | | 2.5349 | 981 | 0.1654 | | 2.5375 | 982 | 0.1597 | | 2.5401 | 983 | 0.166 | | 2.5426 | 984 | 0.1027 | | 2.5452 | 985 | 0.214 | | 2.5478 | 986 | 0.0963 | | 2.5504 | 987 | 0.1128 | | 2.5530 | 988 | 0.1474 | | 2.5556 | 989 | 0.1065 | | 2.5581 | 990 | 0.1209 | | 2.5607 | 991 | 0.132 | | 2.5633 | 992 | 0.274 | | 2.5659 | 993 | 0.0845 | | 2.5685 | 994 | 0.1455 | | 2.5711 | 995 | 0.0707 | | 2.5736 | 996 | 0.2082 | | 2.5762 | 997 | 0.0803 | | 2.5788 | 998 | 0.1153 | | 2.5814 | 999 | 0.097 | | 2.5840 | 1000 | 0.0979 | | 2.5866 | 1001 | 0.207 | | 2.5891 | 1002 | 0.1084 | | 2.5917 | 1003 | 0.0725 | | 2.5943 | 1004 | 0.0945 | | 2.5969 | 1005 | 0.1056 | | 2.5995 | 1006 | 0.1284 | | 2.6021 | 1007 | 0.1771 | | 2.6047 | 1008 | 0.1154 | | 2.6072 | 1009 | 0.1597 | | 2.6098 | 1010 | 0.1019 | | 2.6124 | 1011 | 0.1 | | 2.6150 | 1012 | 0.1723 | | 2.6176 | 1013 | 0.1491 | | 2.6202 | 1014 | 0.1447 | | 2.6227 | 1015 | 0.1142 | | 2.6253 | 1016 | 0.0901 | | 2.6279 | 1017 | 0.0805 | | 2.6305 | 1018 | 0.0687 | | 2.6331 | 1019 | 0.1021 | | 2.6357 | 1020 | 0.1089 | | 2.6382 | 1021 | 0.101 | | 2.6408 | 1022 | 0.1154 | | 2.6434 | 1023 | 0.149 | | 2.6460 | 1024 | 0.1731 | | 2.6486 | 1025 | 0.1902 | | 2.6512 | 1026 | 0.106 | | 2.6537 | 1027 | 0.1315 | | 2.6563 | 1028 | 0.1344 | | 2.6589 | 1029 | 0.2004 | | 2.6615 | 1030 | 0.1629 | | 2.6641 | 1031 | 0.1365 | | 2.6667 | 1032 | 0.1638 | | 2.6693 | 1033 | 0.1301 | | 2.6718 | 1034 | 0.1822 | | 2.6744 | 1035 | 0.0965 | | 2.6770 | 1036 | 0.082 | | 2.6796 | 1037 | 0.1501 | | 2.6822 | 1038 | 0.0645 | | 2.6848 | 1039 | 0.1261 | | 2.6873 | 1040 | 0.2367 | | 2.6899 | 1041 | 0.1378 | | 2.6925 | 1042 | 0.1001 | | 2.6951 | 1043 | 0.0973 | | 2.6977 | 1044 | 0.1161 | | 2.7003 | 1045 | 0.1148 | | 2.7028 | 1046 | 0.1242 | | 2.7054 | 1047 | 0.0867 | | 2.7080 | 1048 | 0.1116 | | 2.7106 | 1049 | 0.1502 | | 2.7132 | 1050 | 0.1594 | | 2.7158 | 1051 | 0.1459 | | 2.7183 | 1052 | 0.1533 | | 2.7209 | 1053 | 0.1791 | | 2.7235 | 1054 | 0.1745 | | 2.7261 | 1055 | 0.1128 | | 2.7287 | 1056 | 0.1859 | | 2.7313 | 1057 | 0.0938 | | 2.7339 | 1058 | 0.1103 | | 2.7364 | 1059 | 0.0907 | | 2.7390 | 1060 | 0.0891 | | 2.7416 | 1061 | 0.1897 | | 2.7442 | 1062 | 0.1048 | | 2.7468 | 1063 | 0.1777 | | 2.7494 | 1064 | 0.1196 | | 2.7519 | 1065 | 0.1477 | | 2.7545 | 1066 | 0.113 | | 2.7571 | 1067 | 0.1565 | | 2.7597 | 1068 | 0.2063 | | 2.7623 | 1069 | 0.0883 | | 2.7649 | 1070 | 0.0888 | | 2.7674 | 1071 | 0.0985 | | 2.7700 | 1072 | 0.1242 | | 2.7726 | 1073 | 0.1177 | | 2.7752 | 1074 | 0.1053 | | 2.7778 | 1075 | 0.0638 | | 2.7804 | 1076 | 0.1103 | | 2.7829 | 1077 | 0.0837 | | 2.7855 | 1078 | 0.1347 | | 2.7881 | 1079 | 0.1333 | | 2.7907 | 1080 | 0.1697 | | 2.7933 | 1081 | 0.1057 | | 2.7959 | 1082 | 0.1102 | | 2.7984 | 1083 | 0.1632 | | 2.8010 | 1084 | 0.1295 | | 2.8036 | 1085 | 0.1349 | | 2.8062 | 1086 | 0.0729 | | 2.8088 | 1087 | 0.1628 | | 2.8114 | 1088 | 0.0935 | | 2.8140 | 1089 | 0.1359 | 
| 2.8165 | 1090 | 0.1262 | | 2.8191 | 1091 | 0.1474 | | 2.8217 | 1092 | 0.1248 | | 2.8243 | 1093 | 0.1124 | | 2.8269 | 1094 | 0.1262 | | 2.8295 | 1095 | 0.2138 | | 2.8320 | 1096 | 0.2028 | | 2.8346 | 1097 | 0.122 | | 2.8372 | 1098 | 0.1275 | | 2.8398 | 1099 | 0.1176 | | 2.8424 | 1100 | 0.0579 | | 2.8450 | 1101 | 0.1725 | | 2.8475 | 1102 | 0.1311 | | 2.8501 | 1103 | 0.1246 | | 2.8527 | 1104 | 0.1132 | | 2.8553 | 1105 | 0.0998 | | 2.8579 | 1106 | 0.1069 | | 2.8605 | 1107 | 0.09 | | 2.8630 | 1108 | 0.0925 | | 2.8656 | 1109 | 0.1689 | | 2.8682 | 1110 | 0.134 | | 2.8708 | 1111 | 0.1002 | | 2.8734 | 1112 | 0.1838 | | 2.8760 | 1113 | 0.1526 | | 2.8786 | 1114 | 0.1513 | | 2.8811 | 1115 | 0.1702 | | 2.8837 | 1116 | 0.101 | | 2.8863 | 1117 | 0.1615 | | 2.8889 | 1118 | 0.0936 | | 2.8915 | 1119 | 0.1835 | | 2.8941 | 1120 | 0.1015 | | 2.8966 | 1121 | 0.0717 | | 2.8992 | 1122 | 0.1218 | | 2.9018 | 1123 | 0.071 | | 2.9044 | 1124 | 0.0987 | | 2.9070 | 1125 | 0.1109 | | 2.9096 | 1126 | 0.12 | | 2.9121 | 1127 | 0.1667 | | 2.9147 | 1128 | 0.1171 | | 2.9173 | 1129 | 0.095 | | 2.9199 | 1130 | 0.0825 | | 2.9225 | 1131 | 0.0654 | | 2.9251 | 1132 | 0.1256 | | 2.9276 | 1133 | 0.1156 | | 2.9302 | 1134 | 0.171 | | 2.9328 | 1135 | 0.0958 | | 2.9354 | 1136 | 0.2148 | | 2.9380 | 1137 | 0.1514 | | 2.9406 | 1138 | 0.1491 | | 2.9432 | 1139 | 0.1478 | | 2.9457 | 1140 | 0.0833 | | 2.9483 | 1141 | 0.0822 | | 2.9509 | 1142 | 0.1612 | | 2.9535 | 1143 | 0.2068 | | 2.9561 | 1144 | 0.155 | | 2.9587 | 1145 | 0.1877 | | 2.9612 | 1146 | 0.1337 | | 2.9638 | 1147 | 0.093 | | 2.9664 | 1148 | 0.1539 | | 2.9690 | 1149 | 0.1659 | | 2.9716 | 1150 | 0.0969 | | 2.9742 | 1151 | 0.1403 | | 2.9767 | 1152 | 0.2031 | | 2.9793 | 1153 | 0.1759 | | 2.9819 | 1154 | 0.1254 | | 2.9845 | 1155 | 0.1242 | | 2.9871 | 1156 | 0.1754 | | 2.9897 | 1157 | 0.0967 | | 2.9922 | 1158 | 0.1602 | | 2.9948 | 1159 | 0.1087 | | 2.9974 | 1160 | 0.1776 | | 3.0 | 1161 | 0.0722 | </details> ### Framework Versions - Python: 3.10.13 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.4.0+cu121 - Accelerate: 1.1.1 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
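The MultipleNegativesRankingLoss cited above is what produced the training-loss column in the table. As a minimal sketch of how this loss is typically wired into a Sentence Transformers 3.x run (the base checkpoint and the toy anchor/positive pairs below are hypothetical, not taken from this card):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Hypothetical base checkpoint; the card does not restate which model was fine-tuned.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# MultipleNegativesRankingLoss consumes (anchor, positive) pairs; every other
# positive in the batch acts as an in-batch negative for a given anchor.
train_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of France?", "How do plants make food?"],
    "positive": ["Paris is the capital of France.", "Plants produce food through photosynthesis."],
})

loss = losses.MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```

Because the negatives come from the batch itself, larger batch sizes generally make this loss more informative.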
PrunaAI/vit_large_patch14_clip_224.openai-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:18:54Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-14T11:22:45Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir vit_large_patch14_clip_224.openai-turbo-tiny-green-smashed huggingface-cli download PrunaAI/vit_large_patch14_clip_224.openai-turbo-tiny-green-smashed --local-dir vit_large_patch14_clip_224.openai-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "vit_large_patch14_clip_224.openai-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "vit_large_patch14_clip_224.openai-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model vit_large_patch14_clip_224.openai, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
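To make the "Sync"/"Async" distinction concrete, here is a minimal, hedged sketch of how one might measure both latencies for a smashed model; the warmup and run counts are arbitrary assumptions, not the settings in `model/smash_config.json`, and it assumes the model returns a single CUDA tensor:

```python
import time
import torch

@torch.no_grad()
def latency_ms(model, x, n_warmup=10, n_runs=50, sync_all=True):
    """Average forward latency in milliseconds."""
    for _ in range(n_warmup):              # hardware warmup, as described in the FAQ
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        out = model(x)
        if sync_all:
            torch.cuda.synchronize()       # "Sync": wait until all GPU work has finished
        else:
            out = out.cpu()                # "Async": stop once the output is usable by the CPU
    return (time.perf_counter() - start) / n_runs * 1000.0

# x = torch.rand(1, 3, 224, 224).to('cuda')
# print(latency_ms(smashed_model, x, sync_all=True), latency_ms(smashed_model, x, sync_all=False))
```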
PrunaAI/resnetaa101d.sw_in12k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:53Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T09:32:40Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir resnetaa101d.sw_in12k-turbo-green-smashed huggingface-cli download PrunaAI/resnetaa101d.sw_in12k-turbo-green-smashed --local-dir resnetaa101d.sw_in12k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "resnetaa101d.sw_in12k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "resnetaa101d.sw_in12k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model resnetaa101d.sw_in12k, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
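The load-and-run snippet above stops at a raw forward pass. For an ImageNet classifier like this one, the logits would usually be turned into ranked predictions; a hedged sketch, assuming the smashed model returns a standard `(1, num_classes)` logits tensor:

```python
import torch

logits = smashed_model(image)                        # forward pass from the snippet above
probs = torch.softmax(logits, dim=-1)                # logits -> class probabilities
top5_prob, top5_idx = torch.topk(probs, k=5, dim=-1)

for p, i in zip(top5_prob[0].tolist(), top5_idx[0].tolist()):
    print(f"class {i}: {p:.3f}")                     # map indices to names with an ImageNet label list
```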
PrunaAI/tresnet_l.miil_in1k_448-turbo-green-smashed
PrunaAI
2024-11-13T13:18:51Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-19T13:22:38Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir tresnet_l.miil_in1k_448-turbo-green-smashed huggingface-cli download PrunaAI/tresnet_l.miil_in1k_448-turbo-green-smashed --local-dir tresnet_l.miil_in1k_448-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "tresnet_l.miil_in1k_448-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "tresnet_l.miil_in1k_448-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model tresnet_l.miil_in1k_448, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
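One caveat on the example above: the base model tresnet_l.miil_in1k_448 was trained at 448x448 resolution, while the snippet feeds a 224x224 tensor. Whether the smashed model expects the native resolution depends on `model/smash_config.json`; if it does, the input would look like this (an assumption, not confirmed by the card):

```python
import torch

# Hypothetical: matches the 448px resolution implied by the base model's name.
image = torch.rand(1, 3, 448, 448).to('cuda')
smashed_model(image)
```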
PrunaAI/vit_base_patch8_224.dino-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:18:47Z
3
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-14T10:57:38Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir vit_base_patch8_224.dino-turbo-tiny-green-smashed huggingface-cli download PrunaAI/vit_base_patch8_224.dino-turbo-tiny-green-smashed --local-dir vit_base_patch8_224.dino-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "vit_base_patch8_224.dino-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "vit_base_patch8_224.dino-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model vit_base_patch8_224.dino, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
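Since vit_base_patch8_224.dino is a self-supervised DINO checkpoint rather than a classifier, its output is usually treated as an image embedding. A minimal sketch of comparing two images by cosine similarity, under the assumption that the smashed model returns one feature vector per image:

```python
import torch
import torch.nn.functional as F

img_a = torch.rand(1, 3, 224, 224).to('cuda')
img_b = torch.rand(1, 3, 224, 224).to('cuda')

emb_a = smashed_model(img_a)                  # assumed shape: (1, feature_dim)
emb_b = smashed_model(img_b)

print(F.cosine_similarity(emb_a, emb_b, dim=-1).item())  # near 1.0 for visually similar images
```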
PrunaAI/vit_base_patch16_224.augreg_in21k-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:18:46Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-14T10:58:13Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir vit_base_patch16_224.augreg_in21k-turbo-tiny-green-smashed huggingface-cli download PrunaAI/vit_base_patch16_224.augreg_in21k-turbo-tiny-green-smashed --local-dir vit_base_patch16_224.augreg_in21k-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "vit_base_patch16_224.augreg_in21k-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "vit_base_patch16_224.augreg_in21k-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model vit_base_patch16_224.augreg_in21k, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
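The `torch.rand` input above is only a shape check. Real inference needs a preprocessed image; a hedged sketch using torchvision, where the resize/crop sizes and ImageNet mean/std are common defaults assumed here rather than values read from the smashed config:

```python
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # common ImageNet stats (assumed;
                         std=[0.229, 0.224, 0.225]),   # timm ViT configs may use 0.5/0.5 instead)
])

image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0).to('cuda')
smashed_model(image)
```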
PrunaAI/convnext_femto_ols.d1_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:44Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-07T16:42:39Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir convnext_femto_ols.d1_in1k-turbo-green-smashed huggingface-cli download PrunaAI/convnext_femto_ols.d1_in1k-turbo-green-smashed --local-dir convnext_femto_ols.d1_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "convnext_femto_ols.d1_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the Hugging Face model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "convnext_femto_ols.d1_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model convnext_femto_ols.d1_in1k, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
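As an alternative to shelling out to the CLI through `subprocess` in Option 2, the same files can be fetched with the `huggingface_hub` library directly; a minimal sketch for this repository:

```python
from huggingface_hub import snapshot_download

repo_name = "convnext_femto_ols.d1_in1k-turbo-green-smashed"
snapshot_download(
    repo_id="PrunaAI/" + repo_name,
    local_dir=repo_name,          # mirrors the CLI's --local-dir argument
)
```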
PrunaAI/resnet50-turbo-green-smashed
PrunaAI
2024-11-13T13:18:41Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-07T13:39:41Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir resnet50-turbo-green-smashed huggingface-cli download PrunaAI/resnet50-turbo-green-smashed --local-dir resnet50-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "resnet50-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "resnet50-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model resnet50, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
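To make the "Sync"/"Async" distinction above concrete, here is a minimal timing sketch. It is not the exact benchmark harness behind the reported results; it assumes the model files were downloaded as in the Setup steps, and the warmup count and input shape are illustrative choices.

```python
import time
import torch
from pruna_engine.PrunaModel import PrunaModel

smashed_model = PrunaModel.load_model("resnet50-turbo-green-smashed/model")  # downloaded as in Setup
image = torch.rand(1, 3, 224, 224).to('cuda')

for _ in range(10):  # hardware warmup before measuring
    smashed_model(image)
torch.cuda.synchronize()

# "Sync"-style latency: stop the clock only once all GPU work has finished.
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
smashed_model(image)
end.record()
torch.cuda.synchronize()
print(f"sync latency: {start.elapsed_time(end):.2f} ms")

# "Async"-style latency: stop the clock as soon as the call returns to Python.
t0 = time.perf_counter()
smashed_model(image)
print(f"async latency: {(time.perf_counter() - t0) * 1000:.2f} ms")
```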
PrunaAI/maxvit_small_tf_224.in1k-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:18:35Z
4
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T05:15:08Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir maxvit_small_tf_224.in1k-turbo-tiny-green-smashed huggingface-cli download PrunaAI/maxvit_small_tf_224.in1k-turbo-tiny-green-smashed --local-dir maxvit_small_tf_224.in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "maxvit_small_tf_224.in1k-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "maxvit_small_tf_224.in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model maxvit_small_tf_224.in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
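If you want to measure inference throughput yourself, as the FAQ recommends, the following is a minimal sketch under the same assumptions as the card's Setup steps; the warmup and run counts are illustrative choices, not the benchmark's exact configuration.

```python
import time
import torch
from pruna_engine.PrunaModel import PrunaModel

smashed_model = PrunaModel.load_model("maxvit_small_tf_224.in1k-turbo-tiny-green-smashed/model")
image = torch.rand(1, 3, 224, 224).to('cuda')

for _ in range(10):  # hardware warmup, mirroring how the reported results are obtained
    smashed_model(image)
torch.cuda.synchronize()

n_runs = 100
t0 = time.perf_counter()
for _ in range(n_runs):
    smashed_model(image)
torch.cuda.synchronize()  # make sure all queued GPU work is counted
elapsed = time.perf_counter() - t0
print(f"throughput: {n_runs / elapsed:.1f} images/s")
```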
PrunaAI/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:18:34Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-19T13:06:36Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed huggingface-cli download PrunaAI/vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed --local-dir vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 384, 384).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model vit_base_patch32_clip_384.laion2b_ft_in12k_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
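As an alternative to shelling out to the CLI in Option 2, the `huggingface_hub` Python API (installed alongside `huggingface-cli`) can fetch the same files. A minimal sketch, assuming a `huggingface_hub` version that still accepts the `local_dir_use_symlinks` argument:

```python
from huggingface_hub import snapshot_download

repo_name = "vit_base_patch32_clip_384.laion2b_ft_in12k_in1k-turbo-tiny-green-smashed"
snapshot_download(
    repo_id="PrunaAI/" + repo_name,  # same repository as the CLI command above
    local_dir=repo_name,             # write plain files into this directory
    local_dir_use_symlinks=False,    # matches the `--local-dir-use-symlinks False` flag
)
```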
PrunaAI/mobilenetv2_120d.ra_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:33Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T05:42:32Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir mobilenetv2_120d.ra_in1k-turbo-green-smashed huggingface-cli download PrunaAI/mobilenetv2_120d.ra_in1k-turbo-green-smashed --local-dir mobilenetv2_120d.ra_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "mobilenetv2_120d.ra_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "mobilenetv2_120d.ra_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model mobilenetv2_120d.ra_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
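The card also reports CO2 emissions and energy consumption. If you want a rough estimate on your own hardware, one option is the third-party `codecarbon` package (`pip install codecarbon`); this is only a sketch and not necessarily the tooling behind the reported numbers.

```python
import torch
from codecarbon import EmissionsTracker  # third-party package, assumed installed
from pruna_engine.PrunaModel import PrunaModel

smashed_model = PrunaModel.load_model("mobilenetv2_120d.ra_in1k-turbo-green-smashed/model")
image = torch.rand(1, 3, 224, 224).to('cuda')

tracker = EmissionsTracker()
tracker.start()
for _ in range(100):
    smashed_model(image)
torch.cuda.synchronize()          # include all queued GPU work in the tracked window
emissions = tracker.stop()        # estimated kg CO2eq for the tracked block
print(f"estimated emissions for 100 inferences: {emissions:.6f} kg CO2eq")
```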
PrunaAI/mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:27Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T05:57:04Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed huggingface-cli download PrunaAI/mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed --local-dir mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model mobilevitv2_150.cvnets_in22k_ft_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
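To see the "first"-run overhead described in the FAQ on your own machine, here is a minimal sketch under the Setup assumptions; the run count is an illustrative choice.

```python
import time
import torch
from pruna_engine.PrunaModel import PrunaModel

smashed_model = PrunaModel.load_model("mobilevitv2_150.cvnets_in22k_ft_in1k-turbo-green-smashed/model")
image = torch.rand(1, 3, 224, 224).to('cuda')

def timed_call():
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    smashed_model(image)
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) * 1000

first_ms = timed_call()                            # pays one-off CUDA/JIT warmup costs
steady_ms = min(timed_call() for _ in range(10))   # best of several warmed-up runs
print(f"first run: {first_ms:.1f} ms, steady state: {steady_ms:.1f} ms")
```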
PrunaAI/resnet18.a2_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:25Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T08:45:28Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir resnet18.a2_in1k-turbo-green-smashed huggingface-cli download PrunaAI/resnet18.a2_in1k-turbo-green-smashed --local-dir resnet18.a2_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "resnet18.a2_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "resnet18.a2_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model resnet18.a2_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
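The FAQ notes that output quality "might vary slightly" relative to the base model. One way to spot-check this yourself is to compare predictions against the original timm checkpoint; a minimal sketch, assuming `timm` is installed, the pretrained weights are downloadable, and the smashed model returns logits shaped like the base model's.

```python
import timm
import torch
from pruna_engine.PrunaModel import PrunaModel

# Base model from timm; downloading the pretrained weights is assumed to work here.
base_model = timm.create_model("resnet18.a2_in1k", pretrained=True).eval().to('cuda')
smashed_model = PrunaModel.load_model("resnet18.a2_in1k-turbo-green-smashed/model")

image = torch.rand(1, 3, 224, 224).to('cuda')
with torch.no_grad():
    base_logits = base_model(image)
    smashed_logits = smashed_model(image)  # assumed to return logits like the base model

print("top-1 classes agree:",
      base_logits.argmax(-1).item() == smashed_logits.argmax(-1).item())
```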
PrunaAI/gcresnet33ts.ra2_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:17Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-07T19:13:14Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir gcresnet33ts.ra2_in1k-turbo-green-smashed huggingface-cli download PrunaAI/gcresnet33ts.ra2_in1k-turbo-green-smashed --local-dir gcresnet33ts.ra2_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "gcresnet33ts.ra2_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "gcresnet33ts.ra2_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model gcresnet33ts.ra2_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
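A slightly more robust variant of Option 2, sketched under the same assumptions as the card: `pathlib` replaces the raw `mkdir` call (portable and idempotent on re-runs) and `check=True` surfaces download failures instead of continuing silently.

```python
import subprocess
from pathlib import Path

repo_name = "gcresnet33ts.ra2_in1k-turbo-green-smashed"
Path(repo_name).mkdir(exist_ok=True)  # does not fail if the directory already exists
subprocess.run(
    ["huggingface-cli", "download", "PrunaAI/" + repo_name,
     "--local-dir", repo_name, "--local-dir-use-symlinks", "False"],
    check=True,  # raise CalledProcessError if the download fails
)
```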
PrunaAI/resnet34.a1_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:15Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T08:53:01Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir resnet34.a1_in1k-turbo-green-smashed huggingface-cli download PrunaAI/resnet34.a1_in1k-turbo-green-smashed --local-dir resnet34.a1_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "resnet34.a1_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "resnet34.a1_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model resnet34.a1_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
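The card also reports inference memory. To get a rough peak-memory figure on your own GPU, here is a minimal sketch under the Setup assumptions; note it only counts memory allocated through PyTorch's allocator.

```python
import torch
from pruna_engine.PrunaModel import PrunaModel

smashed_model = PrunaModel.load_model("resnet34.a1_in1k-turbo-green-smashed/model")
image = torch.rand(1, 3, 224, 224).to('cuda')

smashed_model(image)                   # warmup so one-off allocations happen first
torch.cuda.synchronize()
torch.cuda.reset_peak_memory_stats()   # start the peak counter from a clean slate
smashed_model(image)
torch.cuda.synchronize()
peak_mb = torch.cuda.max_memory_allocated() / 1024**2
print(f"peak inference memory: {peak_mb:.1f} MB")
```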
PrunaAI/mobilenetv2_110d.ra_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:14Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T05:41:23Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished executing. "Async" metrics are obtained without syncing all GPU processes, stopping the measurement as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases. ## Setup You can run the smashed model with these steps: 0. Check that you have Linux, Python 3.10, and CUDA 12.1.0 installed. For CUDA, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install `pruna-engine`, available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use the command line interface (CLI): ```bash mkdir mobilenetv2_110d.ra_in1k-turbo-green-smashed huggingface-cli download PrunaAI/mobilenetv2_110d.ra_in1k-turbo-green-smashed --local-dir mobilenetv2_110d.ra_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "mobilenetv2_110d.ra_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually from the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "mobilenetv2_110d.ra_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info is in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model mobilenetv2_110d.ra_in1k, which provided the base model, before using this model. The license of `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
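Step 3 uses a random tensor as input. To classify a real image instead, here is a sketch using torchvision preprocessing; `example.jpg` is a placeholder path, the normalization statistics are the common ImageNet values (check the base model's pretrained config for the exact ones), and the smashed model is assumed to return logits.

```python
import torch
from PIL import Image
from torchvision import transforms
from pruna_engine.PrunaModel import PrunaModel

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # Common ImageNet statistics; verify against the base model's config.
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to('cuda')  # placeholder image path
smashed_model = PrunaModel.load_model("mobilenetv2_110d.ra_in1k-turbo-green-smashed/model")
logits = smashed_model(image)  # assumed to be a logits tensor
print(int(torch.as_tensor(logits).argmax(-1)))  # predicted ImageNet-1k class index
```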
PrunaAI/resnet101.tv_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:13Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-10T09:16:01Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, JIT compilation, CUDA graphs, and Triton. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. The documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) provides a tutorial for running models in Docker if needed. - ***What is the naming convention for Pruna Hugging Face models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How can I compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir resnet101.tv_in1k-turbo-green-smashed huggingface-cli download PrunaAI/resnet101.tv_in1k-turbo-green-smashed --local-dir resnet101.tv_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "resnet101.tv_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "resnet101.tv_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model resnet101.tv_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/ecaresnet269d.ra2_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:12Z
3
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-19T10:38:14Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir ecaresnet269d.ra2_in1k-turbo-green-smashed huggingface-cli download PrunaAI/ecaresnet269d.ra2_in1k-turbo-green-smashed --local-dir ecaresnet269d.ra2_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "ecaresnet269d.ra2_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "ecaresnet269d.ra2_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model ecaresnet269d.ra2_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/vit_tiny_patch16_384.augreg_in21k_ft_in1k-turbo-green-smashed
PrunaAI
2024-11-13T13:18:09Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-19T13:09:24Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir vit_tiny_patch16_384.augreg_in21k_ft_in1k-turbo-green-smashed huggingface-cli download PrunaAI/vit_tiny_patch16_384.augreg_in21k_ft_in1k-turbo-green-smashed --local-dir vit_tiny_patch16_384.augreg_in21k_ft_in1k-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "vit_tiny_patch16_384.augreg_in21k_ft_in1k-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "vit_tiny_patch16_384.augreg_in21k_ft_in1k-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. import torch; image = torch.rand(1, 3, 224, 224).to('cuda') smashed_model(image) ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model vit_tiny_patch16_384.augreg_in21k_ft_in1k before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/cerspense-zeroscope_v1-1_320s-turbo-green-smashed
PrunaAI
2024-11-13T13:17:16Z
1
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-27T18:19:33Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir cerspense-zeroscope_v1-1_320s-turbo-green-smashed huggingface-cli download PrunaAI/cerspense-zeroscope_v1-1_320s-turbo-green-smashed --local-dir cerspense-zeroscope_v1-1_320s-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "cerspense-zeroscope_v1-1_320s-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "cerspense-zeroscope_v1-1_320s-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. prompt = 'A knife is slicing a fruit' smashed_model(prompt=prompt, height=256, width=256).frames[0] ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model cerspense/zeroscope_v1-1_320s before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/cerspense-zeroscope_v2_30x448x256-turbo-green-smashed
PrunaAI
2024-11-13T13:17:13Z
4
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-27T18:17:50Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir cerspense-zeroscope_v2_30x448x256-turbo-green-smashed huggingface-cli download PrunaAI/cerspense-zeroscope_v2_30x448x256-turbo-green-smashed --local-dir cerspense-zeroscope_v2_30x448x256-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "cerspense-zeroscope_v2_30x448x256-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "cerspense-zeroscope_v2_30x448x256-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. prompt = 'A knife is slicing a fruit' smashed_model(prompt=prompt, height=256, width=256).frames[0] ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model cerspense/zeroscope_v2_30x448x256 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/cerspense-zeroscope_v2_576w-turbo-green-smashed
PrunaAI
2024-11-13T13:17:11Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:16:07Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir cerspense-zeroscope_v2_576w-turbo-green-smashed huggingface-cli download PrunaAI/cerspense-zeroscope_v2_576w-turbo-green-smashed --local-dir cerspense-zeroscope_v2_576w-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "cerspense-zeroscope_v2_576w-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "cerspense-zeroscope_v2_576w-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='A knife is slicing a fruit', height=256, width=256).frames[0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model cerspense/zeroscope_v2_576w before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/juggernaut-turbo-green-smashed
PrunaAI
2024-11-13T13:16:55Z
3
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:05:39Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir juggernaut-turbo-green-smashed huggingface-cli download PrunaAI/juggernaut-turbo-green-smashed --local-dir juggernaut-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "juggernaut-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "juggernaut-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/274039 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/stabilityai-sdxl-turbo-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:16:54Z
7
2
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-02-12T14:13:59Z
--- license: apache-2.0 library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir stabilityai-sdxl-turbo-turbo-tiny-green-smashed huggingface-cli download PrunaAI/stabilityai-sdxl-turbo-turbo-tiny-green-smashed --local-dir stabilityai-sdxl-turbo-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "stabilityai-sdxl-turbo-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "stabilityai-sdxl-turbo-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input of. ``` ## Configurations The configuration info are in `config.json`. ## Credits & License We follow the same license as the original model. Please check the license of the original model stabilityai/sdxl-turbo before using this model which provided the base model. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/yehiaserag-anime-pencil-diffusion-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:16:52Z
5
2
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-01-29T16:39:01Z
--- license: apache-2.0 library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir yehiaserag-anime-pencil-diffusion-turbo-tiny-green-smashed huggingface-cli download PrunaAI/yehiaserag-anime-pencil-diffusion-turbo-tiny-green-smashed --local-dir yehiaserag-anime-pencil-diffusion-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "yehiaserag-anime-pencil-diffusion-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "yehiaserag-anime-pencil-diffusion-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=512, width=512)[0][0] # Run the model where x is the expected input of. ``` ## Configurations The configuration info are in `config.json`. ## Credits & License We follow the same license as the original model. Please check the license of the original model yehiaserag/anime-pencil-diffusion before using this model which provided the base model. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/sdxl-yamers-realistic-5-turbo-green-smashed
PrunaAI
2024-11-13T13:16:50Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T22:41:53Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir sdxl-yamers-realistic-5-turbo-green-smashed huggingface-cli download PrunaAI/sdxl-yamers-realistic-5-turbo-green-smashed --local-dir sdxl-yamers-realistic-5-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "sdxl-yamers-realistic-5-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "sdxl-yamers-realistic-5-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/299716 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/newrealityxl-all-in-one-photographic-turbo-green-smashed
PrunaAI
2024-11-13T13:16:47Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T22:45:23Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir newrealityxl-all-in-one-photographic-turbo-green-smashed huggingface-cli download PrunaAI/newrealityxl-all-in-one-photographic-turbo-green-smashed --local-dir newrealityxl-all-in-one-photographic-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "newrealityxl-all-in-one-photographic-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "newrealityxl-all-in-one-photographic-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/312982 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/segmind-SSD-1B-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:16:45Z
10
5
pruna-engine
[ "pruna-engine", "region:us" ]
null
2023-11-24T16:03:23Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir segmind-SSD-1B-turbo-tiny-green-smashed huggingface-cli download PrunaAI/segmind-SSD-1B-turbo-tiny-green-smashed --local-dir segmind-SSD-1B-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "segmind-SSD-1B-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "segmind-SSD-1B-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input of. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model segmind/SSD-1B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/leosams-helloworld-xl-turbo-green-smashed
PrunaAI
2024-11-13T13:16:44Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:48:42Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir leosams-helloworld-xl-turbo-green-smashed huggingface-cli download PrunaAI/leosams-helloworld-xl-turbo-green-smashed --local-dir leosams-helloworld-xl-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "leosams-helloworld-xl-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "leosams-helloworld-xl-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/338512 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/sdxlnijispecial-edition-turbo-green-smashed
PrunaAI
2024-11-13T13:16:43Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:40:47Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir sdxlnijispecial-edition-turbo-green-smashed huggingface-cli download PrunaAI/sdxlnijispecial-edition-turbo-green-smashed --local-dir sdxlnijispecial-edition-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "sdxlnijispecial-edition-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "sdxlnijispecial-edition-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/154625?type=Model&format=SafeTensor&size=full&fp=fp16 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/dreamlike-art-dreamlike-diffusion-1.0-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:16:41Z
12
2
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-01-29T17:43:54Z
--- license: apache-2.0 library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir dreamlike-art-dreamlike-diffusion-1.0-turbo-tiny-green-smashed huggingface-cli download PrunaAI/dreamlike-art-dreamlike-diffusion-1.0-turbo-tiny-green-smashed --local-dir dreamlike-art-dreamlike-diffusion-1.0-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "dreamlike-art-dreamlike-diffusion-1.0-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "dreamlike-art-dreamlike-diffusion-1.0-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=512, width=512)[0][0] # Run the model where x is the expected input of. ``` ## Configurations The configuration info are in `config.json`. ## Credits & License We follow the same license as the original model. Please check the license of the original model dreamlike-art/dreamlike-diffusion-1.0 before using this model which provided the base model. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/wesumix-real-fantasy-5-turbo-green-smashed
PrunaAI
2024-11-13T13:16:39Z
3
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:00:42Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir wesumix-real-fantasy-5-turbo-green-smashed huggingface-cli download PrunaAI/wesumix-real-fantasy-5-turbo-green-smashed --local-dir wesumix-real-fantasy-5-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "wesumix-real-fantasy-5-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "wesumix-real-fantasy-5-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/108403 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/dreamlike-art-dreamlike-anime-1.0-turbo-green-smashed
PrunaAI
2024-11-13T13:16:37Z
3
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-25T18:55:44Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 2. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir dreamlike-art-dreamlike-anime-1.0-turbo-green-smashed huggingface-cli download PrunaAI/dreamlike-art-dreamlike-anime-1.0-turbo-green-smashed --local-dir dreamlike-art-dreamlike-anime-1.0-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "dreamlike-art-dreamlike-anime-1.0-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "dreamlike-art-dreamlike-anime-1.0-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. prompt = 'Beautiful fruits in trees' smashed_model(prompt=prompt, height=512, width=512)[0][0] ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model dreamlike-art/dreamlike-anime-1.0 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/zavychromaxl-turbo-green-smashed
PrunaAI
2024-11-13T13:16:36Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T22:44:00Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir zavychromaxl-turbo-green-smashed huggingface-cli download PrunaAI/zavychromaxl-turbo-green-smashed --local-dir zavychromaxl-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "zavychromaxl-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "zavychromaxl-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/362861?type=Model&format=SafeTensor&size=full&fp=fp16 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/picxreal-turbo-green-smashed
PrunaAI
2024-11-13T13:16:35Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T21:25:50Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might vary slightly compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in Docker in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir picxreal-turbo-green-smashed huggingface-cli download PrunaAI/picxreal-turbo-green-smashed --local-dir picxreal-turbo-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "picxreal-turbo-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "picxreal-turbo-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=512, width=512)[0][0] # Run the model where x is the expected input the model. ``` ## Configurations The configuration info are in `model/smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model https://civitai.com/api/download/models/272376 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/absolutereality-turbo-green-smashed
PrunaAI
2024-11-13T13:16:33Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T22:52:18Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir absolutereality-turbo-green-smashed
        huggingface-cli download PrunaAI/absolutereality-turbo-green-smashed --local-dir absolutereality-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "absolutereality-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model (a sketch for saving the output follows this card).
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "absolutereality-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model (https://civitai.com/api/download/models/132760?type=Model&format=SafeTensor&size=pruned&fp=fp16), which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/bluepencil-xl-turbo-green-smashed
PrunaAI
2024-11-13T13:16:32Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T22:36:24Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed (a sanity-check sketch follows this card). For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir bluepencil-xl-turbo-green-smashed
        huggingface-cli download PrunaAI/bluepencil-xl-turbo-green-smashed --local-dir bluepencil-xl-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "bluepencil-xl-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "bluepencil-xl-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model (https://civitai.com/api/download/models/323375), which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/Linaqruf-animagine-xl-turbo-green-smashed
PrunaAI
2024-11-13T13:16:28Z
1
3
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-01-29T18:18:02Z
---
license: apache-2.0
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Share feedback and suggestions on the Slack of Pruna AI (Coming soon!).

## Results

![image info](./plots.png)

**Important remarks:**
- The quality of the model output might slightly vary compared to the base model. There might be minimal quality loss.
- These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `config.json`, after a hardware warmup. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...).
- You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

## Setup

You can run the smashed model with these steps:

0. Check that the cuda, torch, and packaging requirements are installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. For packaging and torch, run `pip install packaging torch`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu] --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir Linaqruf-animagine-xl-turbo-green-smashed
        huggingface-cli download PrunaAI/Linaqruf-animagine-xl-turbo-green-smashed --local-dir Linaqruf-animagine-xl-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "Linaqruf-animagine-xl-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "Linaqruf-animagine-xl-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `config.json`.

## License

We follow the same license as the original model. Please check the license of the original model Linaqruf/animagine-xl before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/stabilityai-stable-diffusion-xl-base-1.0-turbo-green-smashed
PrunaAI
2024-11-13T13:16:27Z
5
7
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2023-11-24T03:53:37Z
---
license: apache-2.0
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Share feedback and suggestions on the Slack of Pruna AI (Coming soon!).

## Results

![image info](./plots.png)

**Important remarks:**
- The quality of the model output might slightly vary compared to the base model. There might be minimal quality loss.
- These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `config.json`, after a hardware warmup. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...).
- You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

## Setup

You can run the smashed model with these steps:

0. Check that the cuda, torch, and packaging requirements are installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. For packaging and torch, run `pip install packaging torch`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu] --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir stabilityai-stable-diffusion-xl-base-1.0-turbo-green-smashed
        huggingface-cli download PrunaAI/stabilityai-stable-diffusion-xl-base-1.0-turbo-green-smashed --local-dir stabilityai-stable-diffusion-xl-base-1.0-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "stabilityai-stable-diffusion-xl-base-1.0-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "stabilityai-stable-diffusion-xl-base-1.0-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `config.json`.

## License

We follow the same license as the original model. Please check the license of the original model stabilityai/stable-diffusion-xl-base-1.0 before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/pony-diffusion-v6-xl-turbo-green-smashed
PrunaAI
2024-11-13T13:16:27Z
3
1
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T22:27:35Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads (see the timing sketch after this card).

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir pony-diffusion-v6-xl-turbo-green-smashed
        huggingface-cli download PrunaAI/pony-diffusion-v6-xl-turbo-green-smashed --local-dir pony-diffusion-v6-xl-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "pony-diffusion-v6-xl-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "pony-diffusion-v6-xl-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model (https://civitai.com/api/download/models/290640), which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/CompVis-stable-diffusion-v1-4-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:16:25Z
8
2
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2023-11-23T03:25:53Z
---
license: apache-2.0
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir CompVis-stable-diffusion-v1-4-turbo-tiny-green-smashed
        huggingface-cli download PrunaAI/CompVis-stable-diffusion-v1-4-turbo-tiny-green-smashed --local-dir CompVis-stable-diffusion-v1-4-turbo-tiny-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "CompVis-stable-diffusion-v1-4-turbo-tiny-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "CompVis-stable-diffusion-v1-4-turbo-tiny-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=512, width=512)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `config.json`.

## Credits & License

We follow the same license as the original model. Please check the license of the original model CompVis/stable-diffusion-v1-4, which provided the base model, before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/stabilityai-stable-diffusion-xl-base-1.0-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:16:24Z
9
3
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-02-12T19:29:30Z
---
license: apache-2.0
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir stabilityai-stable-diffusion-xl-base-1.0-turbo-tiny-green-smashed
        huggingface-cli download PrunaAI/stabilityai-stable-diffusion-xl-base-1.0-turbo-tiny-green-smashed --local-dir stabilityai-stable-diffusion-xl-base-1.0-turbo-tiny-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "stabilityai-stable-diffusion-xl-base-1.0-turbo-tiny-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model (a memory-measurement sketch follows this card).
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "stabilityai-stable-diffusion-xl-base-1.0-turbo-tiny-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `config.json`.

## Credits & License

We follow the same license as the original model. Please check the license of the original model stabilityai/stable-diffusion-xl-base-1.0, which provided the base model, before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/devlishphotorealism-sdxl-turbo-green-smashed
PrunaAI
2024-11-13T13:16:22Z
2
0
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:18:26Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir devlishphotorealism-sdxl-turbo-green-smashed
        huggingface-cli download PrunaAI/devlishphotorealism-sdxl-turbo-green-smashed --local-dir devlishphotorealism-sdxl-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "devlishphotorealism-sdxl-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "devlishphotorealism-sdxl-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model (https://civitai.com/api/download/models/213449?type=Model&format=SafeTensor&size=pruned&fp=fp16), which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
PrunaAI/prompthero-openjourney-turbo-green-smashed
PrunaAI
2024-11-13T13:16:17Z
5
2
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-02-05T13:32:47Z
---
license: apache-2.0
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Share feedback and suggestions on the Slack of Pruna AI (Coming soon!).

## Results

![image info](./plots.png)

**Important remarks:**
- The quality of the model output might slightly vary compared to the base model. There might be minimal quality loss.
- These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `config.json`, after a hardware warmup. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...).
- You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

## Setup

You can run the smashed model with these steps:

0. Check that the cuda, torch, and packaging requirements are installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. For packaging and torch, run `pip install packaging torch`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu] --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir prompthero-openjourney-turbo-green-smashed
        huggingface-cli download PrunaAI/prompthero-openjourney-turbo-green-smashed --local-dir prompthero-openjourney-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "prompthero-openjourney-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model (a reproducibility sketch follows this card).
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "prompthero-openjourney-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=512, width=512)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `config.json`.

## License

We follow the same license as the original model. Please check the license of the original model prompthero/openjourney before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
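The run snippet above is nondeterministic; if the engine draws randomness from torch's global generator (an assumption, since the card does not say), seeding may make runs repeatable:

```python
# Sketch: seed torch's global RNG before generating. Whether pruna-engine
# uses this generator is an assumption; verify by comparing your own outputs.
import torch
from pruna_engine.PrunaModel import PrunaModel

smashed_model = PrunaModel.load_model("prompthero-openjourney-turbo-green-smashed/model")

torch.manual_seed(42)
image_a = smashed_model(prompt="Beautiful fruits in trees", height=512, width=512)[0][0]

torch.manual_seed(42)
image_b = smashed_model(prompt="Beautiful fruits in trees", height=512, width=512)[0][0]
# If the assumption holds, image_a and image_b should be identical.
```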
PrunaAI/ultraspice-turbo-green-smashed
PrunaAI
2024-11-13T13:15:06Z
5
1
pruna-engine
[ "pruna-engine", "region:us" ]
null
2024-03-05T23:52:43Z
---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)

<div style="color: #9B1DBE; font-size: 2em; font-weight: bold;">
Deprecation Notice: This model is deprecated and will no longer receive updates.
</div>
<br><br>

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to cuda overheads.

## Setup

You can run the smashed model with these steps:

0. Check that you have the linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on PyPI. It might take up to 15 minutes to install.
    ```bash
    pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
    ```
2. Download the model files using one of these three options.
    - Option 1 - Use command line interface (CLI):
        ```bash
        mkdir ultraspice-turbo-green-smashed
        huggingface-cli download PrunaAI/ultraspice-turbo-green-smashed --local-dir ultraspice-turbo-green-smashed --local-dir-use-symlinks False
        ```
    - Option 2 - Use Python:
        ```python
        import subprocess
        repo_name = "ultraspice-turbo-green-smashed"
        subprocess.run(["mkdir", repo_name])
        subprocess.run(["huggingface-cli", "download", "PrunaAI/" + repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
        ```
    - Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
    ```python
    from pruna_engine.PrunaModel import PrunaModel

    model_path = "ultraspice-turbo-green-smashed/model"  # Specify the downloaded model path.
    smashed_model = PrunaModel.load_model(model_path)  # Load the model.
    smashed_model(prompt='Beautiful fruits in trees', height=1024, width=1024)[0][0]  # Run the model on an example prompt.
    ```

## Configurations

The configuration info is in `model/smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model (https://civitai.com/api/download/models/342732), which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
mav23/Bielik-11B-v2.0-Instruct-GGUF
mav23
2024-11-13T13:11:05Z
307
0
transformers
[ "transformers", "gguf", "finetuned", "pl", "arxiv:2005.01643", "arxiv:2309.11235", "arxiv:2006.09092", "arxiv:2410.18565", "base_model:speakleash/Bielik-11B-v2", "base_model:quantized:speakleash/Bielik-11B-v2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-13T11:51:06Z
---
license: apache-2.0
base_model: speakleash/Bielik-11B-v2
language:
- pl
library_name: transformers
tags:
- finetuned
inference:
  parameters:
    temperature: 0.2
widget:
- messages:
  - role: user
    content: Co przedstawia polskie godło?
extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
---

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v2.0-Instruct

Bielik-11B-v2.0-Instruct is a generative text model featuring 11 billion parameters. It is an instruct fine-tuned version of [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2). The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH. Developed and trained on Polish text corpora, which have been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely, the HPC centers: ACK Cyfronet AGH. The creation and training of Bielik-11B-v2.0-Instruct was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.

🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/

<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.

## Model

The [SpeakLeash](https://speakleash.org/) team is working on their own set of instructions in Polish, which is continuously being expanded and refined by annotators. A portion of these instructions, which had been manually verified and corrected, has been utilized for training purposes. Moreover, due to the limited availability of high-quality instructions in Polish, synthetic instructions were generated with [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and used in training. The dataset used for training comprised over 16 million instructions, consisting of more than 8 billion tokens. The instructions varied in quality, leading to a deterioration in the model's performance. To counteract this while still allowing ourselves to utilize the aforementioned datasets, several improvements were introduced:

* Weighted tokens level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235)
* Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
* Masked prompt tokens (a short sketch of the first and third ideas follows at the end of this section)

Bielik-11B-v2.0-Instruct has been trained with the use of an original open-source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
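The weighted token-level loss and masked prompt tokens lend themselves to a compact illustration. The following is a hedged sketch only — an assumption about the general technique, not the actual ALLaMo training code:

```python
# Illustrative sketch (not the ALLaMo implementation): weighted token-level
# cross-entropy combined with masked prompt tokens.
import torch
import torch.nn.functional as F

def weighted_token_loss(logits: torch.Tensor, labels: torch.Tensor,
                        weights: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq, vocab); labels: (batch, seq) with -100 on prompt
    # tokens so they are masked out; weights: (batch, seq) per-token values
    # reflecting instruction quality.
    per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
        ignore_index=-100,  # positions labeled -100 contribute zero loss
    ).view(labels.shape)
    mask = (labels != -100).float()  # explicit mask over response tokens
    return (per_token * weights * mask).sum() / mask.sum().clamp(min=1.0)
```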
### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
* **Model ref:** speakleash:16d24fc7821149765826d22f335eee5f

### Quantized models:

We know that some people want to explore smaller models or don't have the resources to run a full model. Therefore, we have prepared quantized versions of the Bielik-11B-v2.0-Instruct model in separate repositories:

- [GGUF - Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-GGUF)
- [GPTQ - 4bit](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-GPTQ)
- [FP8](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-FP8) (vLLM, SGLang - Ada Lovelace, Hopper optimized)
- [GGUF - experimental - IQ imatrix IQ1_M, IQ2_XXS, IQ3_XXS, IQ4_XS and calibrated Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct-GGUF-IQ-Imatrix)

Please note that quantized models may offer lower quality of generated answers compared to full-sized variants.

### Chat template

Bielik-11B-v2.0-Instruct uses [ChatML](https://github.com/cognitivecomputations/OpenChatML) as the prompt format.

E.g.
```
prompt = "<s><|im_start|> user\nJakie mamy pory roku?<|im_end|> \n<|im_start|> assistant\n"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|> \n"
```

This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model_name = "speakleash/Bielik-11B-v2.0-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

messages = [
    {"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."},
    {"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
    {"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
    {"role": "user", "content": "Która jest najcieplejsza?"}
]

input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = input_ids.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

Fully formatted input conversation produced by `apply_chat_template` in the previous example:

```
<s><|im_start|> system
Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|>
<|im_start|> user
Jakie mamy pory roku w Polsce?<|im_end|>
<|im_start|> assistant
W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|>
<|im_start|> user
Która jest najcieplejsza?<|im_end|>
```

## Evaluation

Bielik-11B-v2.0-Instruct has been evaluated on several benchmarks to assess its performance across various tasks and languages. These benchmarks include:

1. Open PL LLM Leaderboard
2. Open LLM Leaderboard
3. Polish MT-Bench
4. Polish EQ-Bench (Emotional Intelligence Benchmark)
5. MixEval

The following sections provide detailed results for each of these benchmarks, demonstrating the model's capabilities in both Polish and English language tasks.
### Open PL LLM Leaderboard

Models have been evaluated on the [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) in a 5-shot setting. The benchmark evaluates models on NLP tasks like sentiment analysis, categorization, and text classification, but does not test chatting skills. The Average column is the mean score across all tasks, normalized by baseline scores.

| Model | Parameters (B)| Average |
|---------------------------------|------------|---------|
| Meta-Llama-3.1-405B-Instruct-FP8,API | 405 | 69.44 |
| Mistral-Large-Instruct-2407 | 123 | 69.11 |
| Qwen2-72B-Instruct | 72 | 65.87 |
| Bielik-11B-v2.2-Instruct | 11 | 65.57 |
| Meta-Llama-3.1-70B-Instruct | 70 | 65.49 |
| Bielik-11B-v2.1-Instruct | 11 | 65.45 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 65.23 |
| **Bielik-11B-v2.0-Instruct** | **11** | **64.98** |
| Meta-Llama-3-70B-Instruct | 70 | 64.45 |
| Athene-70B | 70 | 63.65 |
| WizardLM-2-8x22B | 141 | 62.35 |
| Qwen1.5-72B-Chat | 72 | 58.67 |
| Qwen2-57B-A14B-Instruct | 57 | 56.89 |
| glm-4-9b-chat | 9 | 56.61 |
| aya-23-35B | 35 | 56.37 |
| Phi-3.5-MoE-instruct | 41.9 | 56.34 |
| openchat-3.5-0106-gemma | 7 | 55.69 |
| Mistral-Nemo-Instruct-2407 | 12 | 55.27 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.24 |
| Mixtral-8x7B-Instruct-v0.1 | 46.7 | 55.07 |
| Bielik-7B-Instruct-v0.1 | 7 | 44.70 |
| trurl-2-13b-academic | 13 | 36.28 |
| trurl-2-7b | 7 | 26.93 |

The results from the Open PL LLM Leaderboard demonstrate the exceptional performance of Bielik-11B-v2.0-Instruct:

1. Superior performance in its class: Bielik-11B-v2.0-Instruct outperforms all other models with less than 70B parameters. This is a significant achievement, showcasing its efficiency and effectiveness despite having fewer parameters than many competitors.

2. Competitive with larger models: with a score of 64.98, Bielik-11B-v2.0-Instruct performs on par with models in the 70B parameter range. This indicates that it achieves comparable results to much larger models, demonstrating its advanced architecture and training methodology.

3. Substantial improvement over previous version: the model shows a marked improvement over its predecessor, Bielik-7B-Instruct-v0.1, which scored 44.70. This leap in performance highlights the successful enhancements and optimizations implemented in this newer version.

4. Leading position for Polish language models: in the context of Polish language models, Bielik-11B-v2.0-Instruct stands out as a leader. There are no other competitive models specifically tailored for the Polish language that match its performance, making it a crucial resource for Polish NLP tasks.

These results underscore Bielik-11B-v2.0-Instruct's position as a state-of-the-art model for Polish language processing, offering high performance with relatively modest computational requirements.

#### Open PL LLM Leaderboard - Generative Tasks Performance

This section presents a focused comparison of generative Polish language task performance between Bielik models and GPT-3.5. The evaluation is limited to generative tasks due to the constraints of assessing OpenAI models. The comprehensive nature and associated costs of the benchmark explain the limited number of models evaluated.
| Model | Parameters (B) | Average (generative) |
|-------------------------------|----------------|----------------------|
| Bielik-11B-v2.1-Instruct | 11 | 66.58 |
| Bielik-11B-v2.2-Instruct | 11 | 66.11 |
| **Bielik-11B-v2.0-Instruct** | 11 | **65.58** |
| gpt-3.5-turbo-instruct | Unknown | 55.65 |

The performance variation among Bielik versions is minimal, indicating consistent quality across iterations. Bielik-11B-v2.0-Instruct demonstrates an impressive 17.8% performance advantage over GPT-3.5.

### Open LLM Leaderboard

The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges.

| Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k |
|--------------------------|-------|---------------|-----------|----------------|-------|------------|-------|
| Bielik-11B-v2.2-Instruct | 69.86 | 59.90 | 80.16 | 58.34 | 64.34 | 75.30 | 81.12 |
| Bielik-11B-v2.1-Instruct | 69.82 | 59.56 | 80.20 | 59.35 | 64.18 | 75.06 | 80.59 |
| **Bielik-11B-v2.0-Instruct** | **68.04** | 58.62 | 78.65 | 54.65 | 63.71 | 76.32 | 76.27 |
| Bielik-11B-v2 | 65.87 | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 |
| Mistral-7B-Instruct-v0.2 | 65.71 | 63.14 | 84.88 | 68.26 | 60.78 | 77.19 | 40.03 |
| Bielik-7B-Instruct-v0.1 | 51.26 | 47.53 | 68.91 | 49.47 | 46.18 | 65.51 | 29.95 |

Bielik-11B-v2.0-Instruct shows impressive performance on English language tasks:

1. Improvement over its base model (a 2-point increase).
2. A substantial 16-point improvement over Bielik-7B-Instruct-v0.1.

These results demonstrate Bielik-11B-v2.0-Instruct's versatility in both Polish and English, highlighting the effectiveness of its instruction tuning process.

### Polish MT-Bench

The Bielik-11B-v2.0-Instruct (16-bit) model was also evaluated using the MT-Bench benchmark. The quality of the model was evaluated using the English version (the original version, without modifications) and the Polish version created by SpeakLeash (tasks and evaluation in Polish; the content of the tasks was also adapted to the context of the Polish language).

#### MT-Bench English
| Model | Score |
|-----------------|----------|
| Bielik-11B-v2.1 | 8.537500 |
| Bielik-11B-v2.2 | 8.390625 |
| **Bielik-11B-v2.0** | **8.159375** |

#### MT-Bench Polish
| Model | Parameters (B) | Score |
|-------------------------------------|----------------|----------|
| Qwen2-72B-Instruct | 72 | 8.775000 |
| Mistral-Large-Instruct-2407 | 123 | 8.662500 |
| gemma-2-27b-it | 27 | 8.618750 |
| Mixtral-8x22b | 141 | 8.231250 |
| Meta-Llama-3.1-405B-Instruct | 405 | 8.168750 |
| Meta-Llama-3.1-70B-Instruct | 70 | 8.150000 |
| Bielik-11B-v2.2-Instruct | 11 | 8.115625 |
| Bielik-11B-v2.1-Instruct | 11 | 7.996875 |
| gpt-3.5-turbo | Unknown | 7.868750 |
| Mixtral-8x7b | 46.7 | 7.637500 |
| **Bielik-11B-v2.0-Instruct** | **11** | **7.562500** |
| Mistral-Nemo-Instruct-2407 | 12 | 7.368750 |
| openchat-3.5-0106-gemma | 7 | 6.812500 |
| Mistral-7B-Instruct-v0.2 | 7 | 6.556250 |
| Meta-Llama-3.1-8B-Instruct | 8 | 6.556250 |
| Bielik-7B-Instruct-v0.1 | 7 | 6.081250 |
| Mistral-7B-Instruct-v0.3 | 7 | 5.818750 |
| Polka-Mistral-7B-SFT | 7 | 4.518750 |
| trurl-2-7b | 7 | 2.762500 |

For more information - answers to test tasks and values in each category - visit the [MT-Bench PL](https://huggingface.co/spaces/speakleash/mt-bench-pl) website.
### Polish EQ-Bench [Polish Emotional Intelligence Benchmark for LLMs](https://huggingface.co/spaces/speakleash/polish_eq-bench) | Model | Parameters (B) | Score | |-------------------------------|--------|-------| | Mistral-Large-Instruct-2407 | 123 | 78.07 | | Meta-Llama-3.1-405B-Instruct-FP8 | 405 | 77.23 | | gpt-4o-2024-08-06 | ? | 75.15 | | gpt-4-turbo-2024-04-09 | ? | 74.59 | | Meta-Llama-3.1-70B-Instruct | 70 | 72.53 | | Qwen2-72B-Instruct | 72 | 71.23 | | Meta-Llama-3-70B-Instruct | 70 | 71.21 | | gpt-4o-mini-2024-07-18 | ? | 71.15 | | WizardLM-2-8x22B | 141 | 69.56 | | Bielik-11B-v2.2-Instruct | 11 | 69.05 | | **Bielik-11B-v2.0-Instruct** | **11** | **68.24** | | Qwen1.5-72B-Chat | 72 | 68.03 | | Mixtral-8x22B-Instruct-v0.1 | 141 | 67.63 | | Bielik-11B-v2.1-Instruct | 11 | 60.07 | | Qwen1.5-32B-Chat | 32 | 59.63 | | openchat-3.5-0106-gemma | 7 | 59.58 | | aya-23-35B | 35 | 58.41 | | gpt-3.5-turbo | ? | 57.7 | | Qwen2-57B-A14B-Instruct | 57 | 57.64 | | Mixtral-8x7B-Instruct-v0.1 | 47 | 57.61 | | SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.21 | | Mistral-7B-Instruct-v0.2 | 7 | 47.02 | ### MixEval MixEval is a ground-truth-based English benchmark designed to evaluate Large Language Models (LLMs) efficiently and effectively. Key features of MixEval include: 1. Derived from off-the-shelf benchmark mixtures 2. Highly capable model ranking with a 0.96 correlation to Chatbot Arena 3. Local and quick execution, requiring only 6% of the time and cost compared to running MMLU This benchmark provides a robust and time-efficient method for assessing LLM performance, making it a valuable tool for ongoing model evaluation and comparison. | Model | MixEval | MixEval-Hard | |-------------------------------|---------|--------------| | Bielik-11B-v2.1-Instruct | 74.55 | 45.00 | | Bielik-11B-v2.2-Instruct | 72.35 | 39.65 | | **Bielik-11B-v2.0-Instruct** | **72.10** | **40.20** | | Mistral-7B-Instruct-v0.2 | 70.00 | 36.20 | The results show that Bielik-11B-v2.0-Instruct performs well on the MixEval benchmark, achieving a score of 72.10 on the standard MixEval and 40.20 on MixEval-Hard. Notably, Bielik-11B-v2.0-Instruct significantly outperforms Mistral-7B-Instruct-v0.2 on both metrics, demonstrating its improved capabilities despite being based on a similar architecture. ### Chat Arena PL Chat Arena PL is a human-evaluated benchmark that provides a direct comparison of model performance through head-to-head battles. Unlike the automated benchmarks mentioned above, this evaluation relies on human judgment to assess the quality and effectiveness of model responses. The results offer valuable insights into how different models perform in real-world, conversational scenarios as perceived by human evaluators. Results accessed on 2024-08-26. 
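As a quick reading aid for the ELO column in the table below, the standard Elo expected-score formula (a textbook illustration, not part of the original evaluation) gives the expected win probability of model $A$ over model $B$ as

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}$$

For example, Bielik-11B-v2.0-Instruct (ELO 1059) against gpt-3.5-turbo (ELO 1008) yields $E_A = 1/(1 + 10^{-51/400}) \approx 0.57$, i.e. roughly a 57% expected win rate, broadly consistent with the head-to-head win percentages reported below.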
| # | Model | Battles | Won | Lost | Draws | Win % | ELO | |---|-------|-------|---------|-----------|--------|-------------|-----| | 1 | Bielik-11B-v2.2-Instruct | 92 | 72 | 14 | 6 | 83.72% | 1234 | | 2 | Bielik-11B-v2.1-Instruct | 240 | 171 | 50 | 19 | 77.38% | 1174 | | 3 | gpt-4o-mini | 639 | 402 | 117 | 120 | 77.46% | 1141 | | 4 | Mistral Large 2 (2024-07) | 324 | 188 | 69 | 67 | 73.15% | 1125 | | 5 | Llama-3.1-405B | 548 | 297 | 144 | 107 | 67.35% | 1090 | | 6 | **Bielik-11B-v2.0-Instruct** | 1289 | 695 | 352 | 242 | 66.38% | 1059 | | 7 | Llama-3.1-70B | 498 | 221 | 187 | 90 | 54.17% | 1033 | | 8 | Bielik-1-7B | 2041 | 1029 | 638 | 374 | 61.73% | 1020 | | 9 | Mixtral-8x22B-v0.1 | 432 | 166 | 167 | 99 | 49.85% | 1018 | | 10 | Qwen2-72B | 451 | 179 | 177 | 95 | 50.28% | 1011 | | 11 | gpt-3.5-turbo | 2186 | 1007 | 731 | 448 | 57.94% | 1008 | | 12 | Llama-3.1-8B | 440 | 155 | 227 | 58 | 40.58% | 975 | | 13 | Mixtral-8x7B-v0.1 | 1997 | 794 | 804 | 399 | 49.69% | 973 | | 14 | Llama-3-70b | 2008 | 733 | 909 | 366 | 44.64% | 956 | | 15 | Mistral Nemo (2024-07) | 301 | 84 | 164 | 53 | 33.87% | 954 | | 16 | Llama-3-8b | 1911 | 473 | 1091 | 347 | 30.24% | 909 | | 17 | gemma-7b-it | 1928 | 418 | 1221 | 289 | 25.5% | 888 | ## Limitations and Biases Bielik-11B-v2.0-Instruct is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community in ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs. Bielik-11B-v2.0-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate data. Bielik-11B-v2.0-Instruct was trained on various public datasets. While great efforts have been taken to clear the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs. 
## Citation Please cite this model using the following format: ``` @misc{Bielik11Bv20i, title = {Bielik-11B-v2.0-Instruct model card}, author = {Ociepa, Krzysztof and Flis, ลukasz and Kinas, Remigiusz and Gwoลบdziej, Adrian and Wrรณbel, Krzysztof and {SpeakLeash Team} and {Cyfronet Team}}, year = {2024}, url = {https://huggingface.co/speakleash/Bielik-11B-v2.0-Instruct}, note = {Accessed: 2024-09-10}, % change this date urldate = {2024-09-10} % change this date } @unpublished{Bielik11Bv20a, author = {Ociepa, Krzysztof and Flis, ลukasz and Kinas, Remigiusz and Gwoลบdziej, Adrian and Wrรณbel, Krzysztof}, title = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation}, year = {2024}, } @misc{ociepa2024bielik7bv01polish, title={Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation}, author={Krzysztof Ociepa and ลukasz Flis and Krzysztof Wrรณbel and Adrian Gwoลบdziej and Remigiusz Kinas}, year={2024}, eprint={2410.18565}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.18565}, } ``` ## Responsible for training the model * [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training * [ลukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - conceptualizing and coordinating DPO training, data preparation * [Adrian Gwoลบdziej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data preparation and ensuring data quality * [Krzysztof Wrรณbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. 
Individuals who contributed to the creation of the model:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/), [Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/), [Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/), [Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/), [Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/), [Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/), [Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/), [Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/), [Kacper Milan](https://www.linkedin.com/in/kacper-milan/), [Jan Sowa](https://www.linkedin.com/in/janpiotrsowa/), [Len Krawczyk](https://www.linkedin.com/in/magdalena-krawczyk-7810942ab/), [Marta Seidler](https://www.linkedin.com/in/marta-seidler-751102259/), [Agnieszka Ratajska](https://www.linkedin.com/in/agnieszka-ratajska/), [Krzysztof Koziarek](https://www.linkedin.com/in/krzysztofkoziarek/), [Szymon Pepliński](http://linkedin.com/in/szymonpeplinski/), [Zuzanna Dabić](https://www.linkedin.com/in/zuzanna-dabic/), [Filip Bogacz](https://linkedin.com/in/Fibogacci), [Agnieszka Kosiak](https://www.linkedin.com/in/agn-kosiak), [Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/), [Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/).

Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/), [Marek Magryś](https://www.linkedin.com/in/magrys/), [Mieszko Cholewa](https://www.linkedin.com/in/mieszko-cholewa-613726301/).

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
PrunaAI/SG161222-Realistic_Vision_V1.4-turbo-tiny-green-smashed
PrunaAI
2024-11-13T13:07:58Z
12
2
pruna-engine
[ "pruna-engine", "license:apache-2.0", "region:us" ]
null
2024-02-12T13:24:46Z
--- license: apache-2.0 library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) <div style="color: #9B1DBE; font-size: 2em; font-weight: bold;"> Deprecation Notice: This model is deprecated and will no longer receive updates. </div> <br><br> # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by combining xformers, triton, jit, cuda graphs, tiling, and step caching. - ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial to run models in dockers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). 
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. ## Setup You can run the smashed model with these steps: 0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`. 1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install. ```bash pip install pruna-engine[gpu]==0.6.0 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/ ``` 3. Download the model files using one of these three options. - Option 1 - Use command line interface (CLI): ```bash mkdir SG161222-Realistic_Vision_V1.4-turbo-tiny-green-smashed huggingface-cli download PrunaAI/SG161222-Realistic_Vision_V1.4-turbo-tiny-green-smashed --local-dir SG161222-Realistic_Vision_V1.4-turbo-tiny-green-smashed --local-dir-use-symlinks False ``` - Option 2 - Use Python: ```python import subprocess repo_name = "SG161222-Realistic_Vision_V1.4-turbo-tiny-green-smashed" subprocess.run(["mkdir", repo_name]) subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"]) ``` - Option 3 - Download them manually on the HuggingFace model page. 3. Load & run the model. ```python from pruna_engine.PrunaModel import PrunaModel model_path = "SG161222-Realistic_Vision_V1.4-turbo-tiny-green-smashed/model" # Specify the downloaded model path. smashed_model = PrunaModel.load_model(model_path) # Load the model. smashed_model(prompt='Beautiful fruits in trees', height=512, width=512)[0][0] # Run the model where x is the expected input of. ``` ## Configurations The configuration info are in `config.json`. ## Credits & License We follow the same license as the original model. Please check the license of the original model SG161222/Realistic_Vision_V1.4 before using this model which provided the base model. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
cuongdev/7nguoi-7000
cuongdev
2024-11-13T12:58:39Z
32
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-11-13T12:54:52Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### 7nguoi-7000 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
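For programmatic testing outside the Colab notebooks, a minimal diffusers sketch is shown below. This is a hedged assumption based on the repo's `StableDiffusionPipeline` tag; the concept's trigger token is not documented in this card, so the prompt is a placeholder:

```python
from diffusers import StableDiffusionPipeline
import torch

# Hypothetical usage sketch; replace the placeholder prompt with the
# trained concept's trigger token, which this card does not document.
pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/7nguoi-7000", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of the trained concept").images[0]
image.save("sample.png")
```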
el3oss/Llama-3-8B-Instruct-defect-fix2
el3oss
2024-11-13T12:53:45Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T12:49:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deepnet/SN29-C00-llama-HK2Nw-1
deepnet
2024-11-13T12:53:01Z
36
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T12:30:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ISAdoraym/dreambooth-sd3-lora1
ISAdoraym
2024-11-13T12:52:24Z
5
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "sd3", "sd3-diffusers", "template:sd-lora", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:openrail++", "region:us" ]
text-to-image
2024-11-13T06:41:09Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: openrail++ tags: - text-to-image - diffusers-training - diffusers - lora - sd3 - sd3-diffusers - template:sd-lora instance_prompt: a photo of cool shirt widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 DreamBooth LoRA - ISAdoraym/dreambooth-sd3-lora1 <Gallery /> ## Model description These are ISAdoraym/dreambooth-sd3-lora1 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `a photo of cool shirt` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](ISAdoraym/dreambooth-sd3-lora1/tree/main) in the Files & versions tab. ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ISAdoraym/dreambooth-sd3-lora1', weight_name='pytorch_lora_weights.safetensors') image = pipeline('A photo of cool shirt').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here ๐Ÿ’พ](/ISAdoraym/dreambooth-sd3-lora1/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it on your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
BlinkDL/rwkv-6-world
BlinkDL
2024-11-13T12:51:56Z
0
144
null
[ "pytorch", "text-generation", "causal-lm", "rwkv", "en", "zh", "fr", "es", "de", "pt", "ru", "it", "ja", "ko", "vi", "ar", "dataset:cerebras/SlimPajama-627B", "dataset:EleutherAI/pile", "dataset:bigcode/starcoderdata", "dataset:oscar-corpus/OSCAR-2301", "arxiv:2404.05892", "license:apache-2.0", "region:us" ]
text-generation
2024-02-08T17:47:54Z
---
language:
- en
- zh
- fr
- es
- de
- pt
- ru
- it
- ja
- ko
- vi
- ar
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- EleutherAI/pile
- bigcode/starcoderdata
- oscar-corpus/OSCAR-2301
---

# RWKV-6 World

RWKV-6 paper: https://arxiv.org/abs/2404.05892

Use the rwkv pip package 0.8.24+ for RWKV-6 inference: https://pypi.org/project/rwkv/ (pipeline = PIPELINE(model, "rwkv_vocab_v20230424") for rwkv-world models)

Online Demo 1: https://huggingface.co/spaces/BlinkDL/RWKV-Gradio-2

Online Demo 2: https://huggingface.co/spaces/BlinkDL/RWKV-Gradio-1

GUI: https://github.com/josStorer/RWKV-Runner (see Releases)

For developers:
https://github.com/BlinkDL/ChatRWKV/blob/main/API_DEMO_CHAT.py
https://github.com/BlinkDL/ChatRWKV/blob/main/RWKV_v6_demo.py

https://www.rwkv.com/

RWKV-6 7B v3 MMLU = 54.2% (using the same "47.9%" code)

RWKV-6 7B v2.1 MMLU = 47.9%: https://github.com/Jellyfish042/rwkv_mmlu

RWKV-6 0.1B (using the pythia-160m tokenizer): https://huggingface.co/BlinkDL/temp-latest-training-models/blob/main/temp/rwkv-x060-173m-pile-20240515-ctx4k.pth

## Model Description

RWKV-6 is trained on 100+ world languages (70% English, 15% multilingual, 15% code).

World = Some_Pile + Some_SlimPajama + Some_StarCoder + Some_OSCAR + All_Wikipedia + All_ChatGPT_Data_I_can_find

World v1 = 0.59T tokens

World v2 = 1.12T tokens

World v2.1 = 1.42T tokens

Recommended fine-tuning format (use \n for newlines):
```
User: xxxxxxxxxxxxxxx

Assistant: xxxxxxxxxxxxxxx
xxxxxxxxxxxxxxx
xxxxxxxxxxxxxxx

User: xxxxxxxxxxxxxxx
xxxxxxxxxxxxxxx

Assistant: xxxxxxxxxxxxxxx
xxxxxxxxxxxxxxx
xxxxxxxxxxxxxxx
xxxxxxxxxxxxxxx
```

A good chat prompt (better to replace \n\n in xxx with \n, so that there are no blank lines inside xxx):
```
User: hi

Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.

User: xxx

Assistant:
```

QA prompt (better to replace \n\n in xxx with \n, so that there are no blank lines inside xxx):
```
Question: xxx

Answer:
```

and

```
Instruction: xxx

Input: xxx

Response:
```

!!! There should not be any space after your final ":" or you will upset the tokenizer and see a non-English response !!!

!!! There should not be any space after your final ":" or you will upset the tokenizer and see a non-English response !!!

!!! There should not be any space after your final ":" or you will upset the tokenizer and see a non-English response !!!
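For completeness, a minimal inference sketch with the rwkv pip package is shown below. The checkpoint path is a placeholder and the strategy/sampling values are illustrative assumptions; only the PIPELINE call mirrors the note above:

```python
import os
os.environ["RWKV_JIT_ON"] = "1"
os.environ["RWKV_CUDA_ON"] = "0"  # set to "1" to compile the CUDA kernel

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Placeholder path: point this at a downloaded RWKV-6 World .pth checkpoint.
model = RWKV(model="path/to/rwkv-6-world-checkpoint", strategy="cuda fp16")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # world-model tokenizer

ctx = "User: hi\n\nAssistant:"  # note: no space after the final colon
args = PIPELINE_ARGS(temperature=1.0, top_p=0.3)
print(pipeline.generate(ctx, token_count=200, args=args))
```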
personal1802/ntrMIXIllustriousXL_v21
personal1802
2024-11-13T12:50:47Z
11
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:Raelina/Raehoshi-illust-XL", "base_model:adapter:Raelina/Raehoshi-illust-XL", "region:us" ]
text-to-image
2024-11-13T12:40:08Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/WHITE.png base_model: Raelina/Raehoshi-illust-XL instance_prompt: null --- # ntrMIXIllustriousXL_v21 <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/personal1802/ntrMIXIllustriousXL_v21/tree/main) them in the Files & versions tab.
mikasenghaas/gpt2-xl-fresh
mikasenghaas
2024-11-13T12:43:25Z
6
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-11-13T12:39:06Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
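Since no usage snippet is given, here is a hedged sketch of the generic mixin pattern. `MyModel` is a placeholder: loading this particular repo would require the author's actual model class, which is not documented here:

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

# Placeholder class: any nn.Module that also subclasses PyTorchModelHubMixin
# gains save_pretrained / from_pretrained / push_to_hub. The real class used
# for this repo is not documented in the card.
class MyModel(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 4):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, hidden_size)

model = MyModel(hidden_size=4)
model.save_pretrained("local-checkpoint")  # writes config + weights
restored = MyModel.from_pretrained("local-checkpoint")
```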
mikasenghaas/gpt2-large-fresh
mikasenghaas
2024-11-13T12:38:34Z
12
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-11-13T12:37:30Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
khilan-crest/twitter-roberta-base-sentiment-latest_13112024T162211
khilan-crest
2024-11-13T12:33:25Z
107
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-13T12:31:50Z
---
library_name: transformers
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: twitter-roberta-base-sentiment-latest_13112024T162211
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# twitter-roberta-base-sentiment-latest_13112024T162211

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2174
- F1: 0.6307
- Learning Rate: 0.0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------------:|
| No log        | 1.0   | 315  | 1.0491          | 0.5960 | 0.0000        |
| 1.2342        | 2.0   | 630  | 0.9992          | 0.6537 | 0.0000        |
| 1.2342        | 3.0   | 945  | 1.1168          | 0.6244 | 0.0000        |
| 0.7754        | 4.0   | 1260 | 1.1775          | 0.6337 | 0.0000        |
| 0.5224        | 5.0   | 1575 | 1.2174          | 0.6307 | 0.0           |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
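For readers who want to reproduce this setup, the hyperparameter list above maps onto a `TrainingArguments` configuration roughly as follows. This is a hedged reconstruction, not the author's actual training script; `output_dir` and anything not listed above are assumptions:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters.
training_args = TrainingArguments(
    output_dir="twitter-roberta-base-sentiment-latest_13112024T162211",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    seed=42,
    optim="adamw_hf",
    lr_scheduler_type="cosine",
    warmup_steps=200,
    num_train_epochs=5,
    fp16=True,  # "Native AMP" mixed precision
)
```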
MarsupialAI/UnslopNemo-12B-v3_EXL2_6bpw_H8
MarsupialAI
2024-11-13T12:32:00Z
8
1
null
[ "safetensors", "mistral", "6-bit", "exl2", "region:us" ]
null
2024-11-13T12:24:49Z
6.0bpw EXL2 quant of https://huggingface.co/TheDrummer/UnslopNemo-12B-v3

8-bit heads (H8). Quantized with the default measurement dataset.
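Since the card gives no loading instructions, below is a hedged sketch following the exllamav2 library's documented example pattern. The local path is a placeholder, and API details may differ across exllamav2 versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: the local directory holding this quant's files.
config = ExLlamaV2Config("UnslopNemo-12B-v3_EXL2_6bpw_H8")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate cache, load model across GPUs
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time,", max_new_tokens=100))
```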
prostponer/assmann
prostponer
2024-11-13T12:24:38Z
6
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-11-13T11:48:17Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: assmann --- # Assmann <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `assmann` to trigger the image generation. ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('prostponer/assmann', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
AhmaadAwais/opt-125m-gptq
AhmaadAwais
2024-11-13T12:20:19Z
82
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-11-13T12:20:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nguyenphuthien/flan-t5-large-Q4_K_M-GGUF
nguyenphuthien
2024-11-13T12:19:20Z
5
0
null
[ "gguf", "text2text-generation", "llama-cpp", "gguf-my-repo", "en", "fr", "ro", "de", "multilingual", "dataset:svakulenk0/qrecc", "dataset:taskmaster2", "dataset:djaym7/wiki_dialog", "dataset:deepmind/code_contests", "dataset:lambada", "dataset:gsm8k", "dataset:aqua_rat", "dataset:esnli", "dataset:quasc", "dataset:qed", "base_model:google/flan-t5-large", "base_model:quantized:google/flan-t5-large", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-13T12:19:16Z
--- language: - en - fr - ro - de - multilingual widget: - text: 'Translate to German: My name is Arthur' example_title: Translation - text: Please answer the following question. Who is going to be the next Ballon d'Or? example_title: Question Answering - text: 'Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.' example_title: Logical reasoning - text: Please answer the following question. What is the boiling point of Nitrogen? example_title: Scientific knowledge - text: Answer the following yes/no question. Can you write a whole Haiku in a single tweet? example_title: Yes/no question - text: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet? example_title: Reasoning task - text: 'Q: ( False or not False or False ) is? A: Let''s think step by step' example_title: Boolean Expressions - text: The square root of x is the cube root of y. What is y to the power of 2, if x = 4? example_title: Math reasoning - text: 'Premise: At my age you will probably have learnt one lesson. Hypothesis: It''s not certain how many lessons you''ll learn by your thirties. Does the premise entail the hypothesis?' example_title: Premise and hypothesis tags: - text2text-generation - llama-cpp - gguf-my-repo datasets: - svakulenk0/qrecc - taskmaster2 - djaym7/wiki_dialog - deepmind/code_contests - lambada - gsm8k - aqua_rat - esnli - quasc - qed license: apache-2.0 base_model: google/flan-t5-large --- # nguyenphuthien/flan-t5-large-Q4_K_M-GGUF This model was converted to GGUF format from [`google/flan-t5-large`](https://huggingface.co/google/flan-t5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/google/flan-t5-large) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Then invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ```bash git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ```bash cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ```bash ./llama-cli --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -p "The meaning of life and the universe is" ``` or ```bash ./llama-server --hf-repo nguyenphuthien/flan-t5-large-Q4_K_M-GGUF --hf-file flan-t5-large-q4_k_m.gguf -c 2048 ```
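The same GGUF file can also be driven from Python. A minimal sketch using llama-cpp-python (not part of the original card), assuming an installed build recent enough to include llama.cpp's T5/encoder-decoder support and with `huggingface_hub` available so `from_pretrained` can fetch the file:

```python
# Minimal llama-cpp-python sketch; assumes the installed wheel includes
# llama.cpp's T5 (encoder-decoder) support and huggingface_hub is available.
from llama_cpp import Llama

# Downloads flan-t5-large-q4_k_m.gguf from the Hub on first use.
llm = Llama.from_pretrained(
    repo_id="nguyenphuthien/flan-t5-large-Q4_K_M-GGUF",
    filename="flan-t5-large-q4_k_m.gguf",
    n_ctx=2048,
)

out = llm("Translate to German: My name is Arthur", max_tokens=64)
print(out["choices"][0]["text"])
```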
deepfile/multilingual-e5-small-onnx-qint8
deepfile
2024-11-13T12:17:58Z
38
1
sentence-transformers
[ "sentence-transformers", "onnx", "safetensors", "bert", "mteb", "Sentence Transformers", "sentence-similarity", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "license:mit", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-11-13T10:46:03Z
--- tags: - mteb - Sentence Transformers - sentence-similarity - sentence-transformers model-index: - name: multilingual-e5-small results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 36.9996434842022 - type: f1 value: 67.95453679103099 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (de) config: de split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 71.64882226980728 - type: ap value: 82.11942130026586 - type: f1 value: 69.87963421606715 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en-ext) config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 75.8095952023988 - type: ap value: 24.46869495579561 - type: f1 value: 63.00108480037597 - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (ja) config: ja split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 64.186295503212 - type: ap value: 15.496804690197042 - type: f1 value: 52.07153895475031 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 88.699325 - type: ap value: 85.27039559917269 - type: f1 value: 88.65556295032513 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 44.69799999999999 - type: f1 value: 43.73187348654165 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (de) config: de split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.245999999999995 - type: f1 value: 39.3863530637684 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (es) config: es split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 40.394 - type: f1 value: 39.301223469483446 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (fr) config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 38.864 - type: f1 value: 37.97974261868003 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (ja) config: ja split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.682 - type: f1 value: 37.07399369768313 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.504 - type: f1 value: 36.62317273874278 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 19.061 - type: map_at_10 value: 31.703 - type: map_at_100 value: 32.967 - type: 
map_at_1000 value: 33.001000000000005 - type: map_at_3 value: 27.466 - type: map_at_5 value: 29.564 - type: mrr_at_1 value: 19.559 - type: mrr_at_10 value: 31.874999999999996 - type: mrr_at_100 value: 33.146 - type: mrr_at_1000 value: 33.18 - type: mrr_at_3 value: 27.667 - type: mrr_at_5 value: 29.74 - type: ndcg_at_1 value: 19.061 - type: ndcg_at_10 value: 39.062999999999995 - type: ndcg_at_100 value: 45.184000000000005 - type: ndcg_at_1000 value: 46.115 - type: ndcg_at_3 value: 30.203000000000003 - type: ndcg_at_5 value: 33.953 - type: precision_at_1 value: 19.061 - type: precision_at_10 value: 6.279999999999999 - type: precision_at_100 value: 0.9129999999999999 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 12.706999999999999 - type: precision_at_5 value: 9.431000000000001 - type: recall_at_1 value: 19.061 - type: recall_at_10 value: 62.802 - type: recall_at_100 value: 91.323 - type: recall_at_1000 value: 98.72 - type: recall_at_3 value: 38.122 - type: recall_at_5 value: 47.155 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 39.22266660528253 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 30.79980849482483 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 57.8790068352054 - type: mrr value: 71.78791276436706 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 82.36328364043163 - type: cos_sim_spearman value: 82.26211536195868 - type: euclidean_pearson value: 80.3183865039173 - type: euclidean_spearman value: 79.88495276296132 - type: manhattan_pearson value: 80.14484480692127 - type: manhattan_spearman value: 80.39279565980743 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (de-en) config: de-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 98.0375782881002 - type: f1 value: 97.86012526096033 - type: precision value: 97.77139874739039 - type: recall value: 98.0375782881002 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (fr-en) config: fr-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 93.35241030156286 - type: f1 value: 92.66050333846944 - type: precision value: 92.3306919069631 - type: recall value: 93.35241030156286 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (ru-en) config: ru-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 94.0699688257707 - type: f1 value: 93.50236693222492 - type: precision value: 93.22791825424315 - type: recall value: 94.0699688257707 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (zh-en) config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 89.25750394944708 - type: f1 value: 88.79234684921889 - type: precision value: 88.57293312269616 - type: recall value: 89.25750394944708 - task: 
type: Classification dataset: type: mteb/banking77 name: MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 79.41558441558442 - type: f1 value: 79.25886487487219 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 35.747820820329736 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 27.045143830596146 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.252999999999997 - type: map_at_10 value: 31.655916666666666 - type: map_at_100 value: 32.680749999999996 - type: map_at_1000 value: 32.79483333333334 - type: map_at_3 value: 29.43691666666666 - type: map_at_5 value: 30.717416666666665 - type: mrr_at_1 value: 28.602750000000004 - type: mrr_at_10 value: 35.56875 - type: mrr_at_100 value: 36.3595 - type: mrr_at_1000 value: 36.427749999999996 - type: mrr_at_3 value: 33.586166666666664 - type: mrr_at_5 value: 34.73641666666666 - type: ndcg_at_1 value: 28.602750000000004 - type: ndcg_at_10 value: 36.06933333333334 - type: ndcg_at_100 value: 40.70141666666667 - type: ndcg_at_1000 value: 43.24341666666667 - type: ndcg_at_3 value: 32.307916666666664 - type: ndcg_at_5 value: 34.129999999999995 - type: precision_at_1 value: 28.602750000000004 - type: precision_at_10 value: 6.097666666666667 - type: precision_at_100 value: 0.9809166666666668 - type: precision_at_1000 value: 0.13766666666666663 - type: precision_at_3 value: 14.628166666666667 - type: precision_at_5 value: 10.266916666666667 - type: recall_at_1 value: 24.252999999999997 - type: recall_at_10 value: 45.31916666666667 - type: recall_at_100 value: 66.03575000000001 - type: recall_at_1000 value: 83.94708333333334 - type: recall_at_3 value: 34.71941666666666 - type: recall_at_5 value: 39.46358333333333 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 9.024000000000001 - type: map_at_10 value: 15.644 - type: map_at_100 value: 17.154 - type: map_at_1000 value: 17.345 - type: map_at_3 value: 13.028 - type: map_at_5 value: 14.251 - type: mrr_at_1 value: 19.674 - type: mrr_at_10 value: 29.826999999999998 - type: mrr_at_100 value: 30.935000000000002 - type: mrr_at_1000 value: 30.987 - type: mrr_at_3 value: 26.645000000000003 - type: mrr_at_5 value: 28.29 - type: ndcg_at_1 value: 19.674 - type: ndcg_at_10 value: 22.545 - type: ndcg_at_100 value: 29.207 - type: ndcg_at_1000 value: 32.912 - type: ndcg_at_3 value: 17.952 - type: ndcg_at_5 value: 19.363 - type: precision_at_1 value: 19.674 - type: precision_at_10 value: 7.212000000000001 - type: precision_at_100 value: 1.435 - type: precision_at_1000 value: 0.212 - type: precision_at_3 value: 13.507 - type: precision_at_5 value: 10.397 - type: recall_at_1 value: 9.024000000000001 - type: recall_at_10 value: 28.077999999999996 - type: recall_at_100 value: 51.403 - type: recall_at_1000 value: 72.406 - type: recall_at_3 value: 16.768 - type: recall_at_5 value: 20.737 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None 
metrics: - type: map_at_1 value: 8.012 - type: map_at_10 value: 17.138 - type: map_at_100 value: 24.146 - type: map_at_1000 value: 25.622 - type: map_at_3 value: 12.552 - type: map_at_5 value: 14.435 - type: mrr_at_1 value: 62.25000000000001 - type: mrr_at_10 value: 71.186 - type: mrr_at_100 value: 71.504 - type: mrr_at_1000 value: 71.514 - type: mrr_at_3 value: 69.333 - type: mrr_at_5 value: 70.408 - type: ndcg_at_1 value: 49.75 - type: ndcg_at_10 value: 37.76 - type: ndcg_at_100 value: 42.071 - type: ndcg_at_1000 value: 49.309 - type: ndcg_at_3 value: 41.644 - type: ndcg_at_5 value: 39.812999999999995 - type: precision_at_1 value: 62.25000000000001 - type: precision_at_10 value: 30.15 - type: precision_at_100 value: 9.753 - type: precision_at_1000 value: 1.9189999999999998 - type: precision_at_3 value: 45.667 - type: precision_at_5 value: 39.15 - type: recall_at_1 value: 8.012 - type: recall_at_10 value: 22.599 - type: recall_at_100 value: 48.068 - type: recall_at_1000 value: 71.328 - type: recall_at_3 value: 14.043 - type: recall_at_5 value: 17.124 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 42.455 - type: f1 value: 37.59462649781862 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 58.092 - type: map_at_10 value: 69.586 - type: map_at_100 value: 69.968 - type: map_at_1000 value: 69.982 - type: map_at_3 value: 67.48100000000001 - type: map_at_5 value: 68.915 - type: mrr_at_1 value: 62.166 - type: mrr_at_10 value: 73.588 - type: mrr_at_100 value: 73.86399999999999 - type: mrr_at_1000 value: 73.868 - type: mrr_at_3 value: 71.6 - type: mrr_at_5 value: 72.99 - type: ndcg_at_1 value: 62.166 - type: ndcg_at_10 value: 75.27199999999999 - type: ndcg_at_100 value: 76.816 - type: ndcg_at_1000 value: 77.09700000000001 - type: ndcg_at_3 value: 71.36 - type: ndcg_at_5 value: 73.785 - type: precision_at_1 value: 62.166 - type: precision_at_10 value: 9.716 - type: precision_at_100 value: 1.065 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 28.278 - type: precision_at_5 value: 18.343999999999998 - type: recall_at_1 value: 58.092 - type: recall_at_10 value: 88.73400000000001 - type: recall_at_100 value: 95.195 - type: recall_at_1000 value: 97.04599999999999 - type: recall_at_3 value: 78.45 - type: recall_at_5 value: 84.316 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 16.649 - type: map_at_10 value: 26.457000000000004 - type: map_at_100 value: 28.169 - type: map_at_1000 value: 28.352 - type: map_at_3 value: 23.305 - type: map_at_5 value: 25.169000000000004 - type: mrr_at_1 value: 32.407000000000004 - type: mrr_at_10 value: 40.922 - type: mrr_at_100 value: 41.931000000000004 - type: mrr_at_1000 value: 41.983 - type: mrr_at_3 value: 38.786 - type: mrr_at_5 value: 40.205999999999996 - type: ndcg_at_1 value: 32.407000000000004 - type: ndcg_at_10 value: 33.314 - type: ndcg_at_100 value: 40.312 - type: ndcg_at_1000 value: 43.685 - type: ndcg_at_3 value: 30.391000000000002 - type: ndcg_at_5 value: 31.525 - type: precision_at_1 value: 32.407000000000004 - type: precision_at_10 value: 8.966000000000001 - type: precision_at_100 value: 1.6019999999999999 - type: precision_at_1000 value: 0.22200000000000003 - type: precision_at_3 value: 20.165 - type: 
precision_at_5 value: 14.722 - type: recall_at_1 value: 16.649 - type: recall_at_10 value: 39.117000000000004 - type: recall_at_100 value: 65.726 - type: recall_at_1000 value: 85.784 - type: recall_at_3 value: 27.914 - type: recall_at_5 value: 33.289 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 36.253 - type: map_at_10 value: 56.16799999999999 - type: map_at_100 value: 57.06099999999999 - type: map_at_1000 value: 57.126 - type: map_at_3 value: 52.644999999999996 - type: map_at_5 value: 54.909 - type: mrr_at_1 value: 72.505 - type: mrr_at_10 value: 79.66 - type: mrr_at_100 value: 79.869 - type: mrr_at_1000 value: 79.88 - type: mrr_at_3 value: 78.411 - type: mrr_at_5 value: 79.19800000000001 - type: ndcg_at_1 value: 72.505 - type: ndcg_at_10 value: 65.094 - type: ndcg_at_100 value: 68.219 - type: ndcg_at_1000 value: 69.515 - type: ndcg_at_3 value: 59.99 - type: ndcg_at_5 value: 62.909000000000006 - type: precision_at_1 value: 72.505 - type: precision_at_10 value: 13.749 - type: precision_at_100 value: 1.619 - type: precision_at_1000 value: 0.179 - type: precision_at_3 value: 38.357 - type: precision_at_5 value: 25.313000000000002 - type: recall_at_1 value: 36.253 - type: recall_at_10 value: 68.744 - type: recall_at_100 value: 80.925 - type: recall_at_1000 value: 89.534 - type: recall_at_3 value: 57.535000000000004 - type: recall_at_5 value: 63.282000000000004 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 80.82239999999999 - type: ap value: 75.65895781725314 - type: f1 value: 80.75880969095746 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.624 - type: map_at_10 value: 34.075 - type: map_at_100 value: 35.229 - type: map_at_1000 value: 35.276999999999994 - type: map_at_3 value: 30.245 - type: map_at_5 value: 32.42 - type: mrr_at_1 value: 22.264 - type: mrr_at_10 value: 34.638000000000005 - type: mrr_at_100 value: 35.744 - type: mrr_at_1000 value: 35.787 - type: mrr_at_3 value: 30.891000000000002 - type: mrr_at_5 value: 33.042 - type: ndcg_at_1 value: 22.264 - type: ndcg_at_10 value: 40.991 - type: ndcg_at_100 value: 46.563 - type: ndcg_at_1000 value: 47.743 - type: ndcg_at_3 value: 33.198 - type: ndcg_at_5 value: 37.069 - type: precision_at_1 value: 22.264 - type: precision_at_10 value: 6.5089999999999995 - type: precision_at_100 value: 0.9299999999999999 - type: precision_at_1000 value: 0.10300000000000001 - type: precision_at_3 value: 14.216999999999999 - type: precision_at_5 value: 10.487 - type: recall_at_1 value: 21.624 - type: recall_at_10 value: 62.303 - type: recall_at_100 value: 88.124 - type: recall_at_1000 value: 97.08 - type: recall_at_3 value: 41.099999999999994 - type: recall_at_5 value: 50.381 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 91.06703146374831 - type: f1 value: 90.86867815863172 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (de) config: de split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 87.46970977740209 - type: f1 value: 86.36832872036588 - task: type: Classification 
dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (es) config: es split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 89.26951300867245 - type: f1 value: 88.93561193959502 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (fr) config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 84.22799874725963 - type: f1 value: 84.30490069236556 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (hi) config: hi split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 86.02007888131948 - type: f1 value: 85.39376041027991 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (th) config: th split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 85.34900542495481 - type: f1 value: 85.39859673336713 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 71.078431372549 - type: f1 value: 53.45071102002276 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (de) config: de split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 65.85798816568047 - type: f1 value: 46.53112748993529 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (es) config: es split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.96864576384256 - type: f1 value: 45.966703022829506 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (fr) config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 61.31537738803633 - type: f1 value: 45.52601712835461 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (hi) config: hi split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 66.29616349946218 - type: f1 value: 47.24166485726613 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (th) config: th split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 67.51537070524412 - type: f1 value: 49.463476319014276 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (af) config: af split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.06792199058508 - type: f1 value: 54.094921857502285 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (am) config: am split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 51.960322797579025 - type: f1 value: 48.547371223370945 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ar) config: ar split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.425016812373904 - type: f1 value: 50.47069202054312 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (az) 
config: az split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.798251513113655 - type: f1 value: 57.05013069086648 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (bn) config: bn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.37794216543376 - type: f1 value: 56.3607992649805 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (cy) config: cy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 46.56018829858777 - type: f1 value: 43.87319715715134 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (da) config: da split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.9724277067922 - type: f1 value: 59.36480066245562 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (de) config: de split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.72696704774715 - type: f1 value: 59.143595966615855 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (el) config: el split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.5971755211836 - type: f1 value: 59.169445724946726 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 70.29589778076665 - type: f1 value: 67.7577001808977 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (es) config: es split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.31136516476126 - type: f1 value: 64.52032955983242 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fa) config: fa split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 61.47903120066317 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fi) config: fi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.45595158036314 - type: f1 value: 58.0891846024637 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fr) config: fr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.47074646940149 - type: f1 value: 62.84830858877575 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (he) config: he split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.046402151983855 - type: f1 value: 55.269074430533195 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hi) config: hi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.06523201075991 - type: f1 value: 61.35339643021369 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB 
MassiveIntentClassification (hu) config: hu split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.954942837928726 - type: f1 value: 57.07035922704846 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (hy) config: hy split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.404169468728995 - type: f1 value: 53.94259011839138 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (id) config: id split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.16610625420309 - type: f1 value: 61.337103431499365 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (is) config: is split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 52.262945527908535 - type: f1 value: 49.7610691598921 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (it) config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.54472091459314 - type: f1 value: 63.469099018440154 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ja) config: ja split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.22797579018157 - type: f1 value: 64.89098471083001 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (jv) config: jv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 50.847343644922674 - type: f1 value: 47.8536963168393 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ka) config: ka split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 48.45326160053799 - type: f1 value: 46.370078045805556 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (km) config: km split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 42.83120376597175 - type: f1 value: 39.68948521599982 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (kn) config: kn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.5084061869536 - type: f1 value: 53.961876160401545 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ko) config: ko split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.7895090786819 - type: f1 value: 61.134223684676 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (lv) config: lv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 54.98991257565569 - type: f1 value: 52.579862862826296 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ml) config: ml split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.90316072629456 - type: f1 value: 58.203024538290336 - task: type: Classification dataset: type: 
mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (mn) config: mn split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.09818426361802 - type: f1 value: 54.22718458445455 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ms) config: ms split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.991257565568255 - type: f1 value: 55.84892781767421 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (my) config: my split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 55.901143241425686 - type: f1 value: 52.25264332199797 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nb) config: nb split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 61.96368527236047 - type: f1 value: 58.927243876153454 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (nl) config: nl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.64223268325489 - type: f1 value: 62.340453718379706 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pl) config: pl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.52589105581708 - type: f1 value: 61.661113187022174 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (pt) config: pt split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 66.84599865501009 - type: f1 value: 64.59342572873005 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ro) config: ro split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 60.81035642232684 - type: f1 value: 57.5169089806797 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ru) config: ru split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.75991930060525 - type: f1 value: 62.89531115787938 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sl) config: sl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 56.51647612642906 - type: f1 value: 54.33154780100043 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sq) config: sq split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.985877605917956 - type: f1 value: 54.46187524463802 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sv) config: sv split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 65.03026227303296 - type: f1 value: 62.34377392877748 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (sw) config: sw split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 53.567585743106925 - type: f1 value: 50.73770655983206 - 
task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ta) config: ta split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.2595830531271 - type: f1 value: 53.657327291708626 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (te) config: te split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.82784129119032 - type: f1 value: 54.82518072665301 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (th) config: th split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.06859448554137 - type: f1 value: 63.00185280500495 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tl) config: tl split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 58.91055817081371 - type: f1 value: 55.54116301224262 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (tr) config: tr split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 63.54404841963686 - type: f1 value: 59.57650946030184 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (ur) config: ur split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 59.27706792199059 - type: f1 value: 56.50010066083435 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (vi) config: vi split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 64.0719569603228 - type: f1 value: 61.817075925647956 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 68.23806321452591 - type: f1 value: 65.24917026029749 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-TW) config: zh-TW split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 62.53530598520511 - type: f1 value: 61.71131132295768 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (af) config: af split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.04303967720243 - type: f1 value: 60.3950085685985 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (am) config: am split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.83591123066578 - type: f1 value: 54.95059828830849 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ar) config: ar split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.62340282447881 - type: f1 value: 59.525159996498225 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (az) config: az split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy 
value: 60.85406859448555 - type: f1 value: 59.129299095681276 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (bn) config: bn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.76731674512441 - type: f1 value: 61.159560612627715 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (cy) config: cy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 50.181573638197705 - type: f1 value: 46.98422176289957 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (da) config: da split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.92737054472092 - type: f1 value: 67.69135611952979 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (de) config: de split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.18964357767318 - type: f1 value: 68.46106138186214 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (el) config: el split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.0712844653665 - type: f1 value: 66.75545422473901 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.4754539340955 - type: f1 value: 74.38427146553252 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (es) config: es split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.82515131136518 - type: f1 value: 69.63516462173847 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fa) config: fa split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.70880968392737 - type: f1 value: 67.45420662567926 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fi) config: fi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 65.95494283792871 - type: f1 value: 65.06191009049222 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fr) config: fr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.75924680564896 - type: f1 value: 68.30833379585945 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (he) config: he split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 63.806321452589096 - type: f1 value: 63.273048243765054 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hi) config: hi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.68997982515133 - type: f1 value: 66.54703855381324 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hu) config: hu 
split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.46940147948891 - type: f1 value: 65.91017343463396 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (hy) config: hy split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.49899125756556 - type: f1 value: 57.90333469917769 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (id) config: id split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.9219905850706 - type: f1 value: 67.23169403762938 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (is) config: is split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.486213853396094 - type: f1 value: 54.85282355583758 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (it) config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.04169468728985 - type: f1 value: 68.83833333320462 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ja) config: ja split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 73.88702084734365 - type: f1 value: 74.04474735232299 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (jv) config: jv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.63416274377943 - type: f1 value: 55.11332211687954 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ka) config: ka split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 52.23604572965702 - type: f1 value: 50.86529813991055 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (km) config: km split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 46.62407531943511 - type: f1 value: 43.63485467164535 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (kn) config: kn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.15601882985878 - type: f1 value: 57.522837510959924 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ko) config: ko split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.84532616005382 - type: f1 value: 69.60021127179697 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (lv) config: lv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 56.65770006724949 - type: f1 value: 55.84219135523227 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ml) config: ml split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 66.53665097511768 - type: f1 value: 65.09087787792639 - task: type: Classification dataset: type: 
mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (mn) config: mn split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.31405514458642 - type: f1 value: 58.06135303831491 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ms) config: ms split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.88231338264964 - type: f1 value: 62.751099407787926 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (my) config: my split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 58.86012104909213 - type: f1 value: 56.29118323058282 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nb) config: nb split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.37390719569602 - type: f1 value: 66.27922244885102 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (nl) config: nl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.8675184936113 - type: f1 value: 70.22146529932019 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pl) config: pl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.2212508406187 - type: f1 value: 67.77454802056282 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (pt) config: pt split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.18090114324143 - type: f1 value: 68.03737625431621 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ro) config: ro split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 64.65030262273034 - type: f1 value: 63.792945486912856 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ru) config: ru split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 69.48217888365838 - type: f1 value: 69.96028997292197 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sl) config: sl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.17821116341627 - type: f1 value: 59.3935969827171 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sq) config: sq split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.86146603900471 - type: f1 value: 60.133692735032376 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sv) config: sv split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.89441829186282 - type: f1 value: 70.03064076194089 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (sw) config: sw split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 
58.15063887020847 - type: f1 value: 56.23326278499678 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ta) config: ta split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 59.43846671149966 - type: f1 value: 57.70440450281974 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (te) config: te split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.8507061197041 - type: f1 value: 59.22916396061171 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (th) config: th split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 70.65568258238063 - type: f1 value: 69.90736239440633 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tl) config: tl split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 60.8843308675185 - type: f1 value: 59.30332663713599 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (tr) config: tr split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.05312710154674 - type: f1 value: 67.44024062594775 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (ur) config: ur split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.111634162743776 - type: f1 value: 60.89083013084519 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (vi) config: vi split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 67.44115669132482 - type: f1 value: 67.92227541674552 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 74.4687289845326 - type: f1 value: 74.16376793486025 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-TW) config: zh-TW split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 68.31876260928043 - type: f1 value: 68.5246745215607 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.90431696479766 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 27.259158476693774 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.28445330838555 - type: mrr value: 31.15758529581164 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.353 - type: map_at_10 value: 11.565 - type: map_at_100 value: 14.097000000000001 - type: map_at_1000 value: 
15.354999999999999 - type: map_at_3 value: 8.749 - type: map_at_5 value: 9.974 - type: mrr_at_1 value: 42.105 - type: mrr_at_10 value: 50.589 - type: mrr_at_100 value: 51.187000000000005 - type: mrr_at_1000 value: 51.233 - type: mrr_at_3 value: 48.246 - type: mrr_at_5 value: 49.546 - type: ndcg_at_1 value: 40.402 - type: ndcg_at_10 value: 31.009999999999998 - type: ndcg_at_100 value: 28.026 - type: ndcg_at_1000 value: 36.905 - type: ndcg_at_3 value: 35.983 - type: ndcg_at_5 value: 33.764 - type: precision_at_1 value: 42.105 - type: precision_at_10 value: 22.786 - type: precision_at_100 value: 6.916 - type: precision_at_1000 value: 1.981 - type: precision_at_3 value: 33.333 - type: precision_at_5 value: 28.731 - type: recall_at_1 value: 5.353 - type: recall_at_10 value: 15.039 - type: recall_at_100 value: 27.348 - type: recall_at_1000 value: 59.453 - type: recall_at_3 value: 9.792 - type: recall_at_5 value: 11.882 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 33.852 - type: map_at_10 value: 48.924 - type: map_at_100 value: 49.854 - type: map_at_1000 value: 49.886 - type: map_at_3 value: 44.9 - type: map_at_5 value: 47.387 - type: mrr_at_1 value: 38.035999999999994 - type: mrr_at_10 value: 51.644 - type: mrr_at_100 value: 52.339 - type: mrr_at_1000 value: 52.35999999999999 - type: mrr_at_3 value: 48.421 - type: mrr_at_5 value: 50.468999999999994 - type: ndcg_at_1 value: 38.007000000000005 - type: ndcg_at_10 value: 56.293000000000006 - type: ndcg_at_100 value: 60.167 - type: ndcg_at_1000 value: 60.916000000000004 - type: ndcg_at_3 value: 48.903999999999996 - type: ndcg_at_5 value: 52.978 - type: precision_at_1 value: 38.007000000000005 - type: precision_at_10 value: 9.041 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 22.084 - type: precision_at_5 value: 15.608 - type: recall_at_1 value: 33.852 - type: recall_at_10 value: 75.893 - type: recall_at_100 value: 92.589 - type: recall_at_1000 value: 98.153 - type: recall_at_3 value: 56.969 - type: recall_at_5 value: 66.283 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 69.174 - type: map_at_10 value: 82.891 - type: map_at_100 value: 83.545 - type: map_at_1000 value: 83.56700000000001 - type: map_at_3 value: 79.944 - type: map_at_5 value: 81.812 - type: mrr_at_1 value: 79.67999999999999 - type: mrr_at_10 value: 86.279 - type: mrr_at_100 value: 86.39 - type: mrr_at_1000 value: 86.392 - type: mrr_at_3 value: 85.21 - type: mrr_at_5 value: 85.92999999999999 - type: ndcg_at_1 value: 79.69000000000001 - type: ndcg_at_10 value: 86.929 - type: ndcg_at_100 value: 88.266 - type: ndcg_at_1000 value: 88.428 - type: ndcg_at_3 value: 83.899 - type: ndcg_at_5 value: 85.56700000000001 - type: precision_at_1 value: 79.69000000000001 - type: precision_at_10 value: 13.161000000000001 - type: precision_at_100 value: 1.513 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 36.603 - type: precision_at_5 value: 24.138 - type: recall_at_1 value: 69.174 - type: recall_at_10 value: 94.529 - type: recall_at_100 value: 99.15 - type: recall_at_1000 value: 99.925 - type: recall_at_3 value: 85.86200000000001 - type: recall_at_5 value: 90.501 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 
24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 39.13064340585255 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 58.97884249325877 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 3.4680000000000004 - type: map_at_10 value: 7.865 - type: map_at_100 value: 9.332 - type: map_at_1000 value: 9.587 - type: map_at_3 value: 5.800000000000001 - type: map_at_5 value: 6.8790000000000004 - type: mrr_at_1 value: 17.0 - type: mrr_at_10 value: 25.629 - type: mrr_at_100 value: 26.806 - type: mrr_at_1000 value: 26.889000000000003 - type: mrr_at_3 value: 22.8 - type: mrr_at_5 value: 24.26 - type: ndcg_at_1 value: 17.0 - type: ndcg_at_10 value: 13.895 - type: ndcg_at_100 value: 20.491999999999997 - type: ndcg_at_1000 value: 25.759999999999998 - type: ndcg_at_3 value: 13.347999999999999 - type: ndcg_at_5 value: 11.61 - type: precision_at_1 value: 17.0 - type: precision_at_10 value: 7.090000000000001 - type: precision_at_100 value: 1.669 - type: precision_at_1000 value: 0.294 - type: precision_at_3 value: 12.3 - type: precision_at_5 value: 10.02 - type: recall_at_1 value: 3.4680000000000004 - type: recall_at_10 value: 14.363000000000001 - type: recall_at_100 value: 33.875 - type: recall_at_1000 value: 59.711999999999996 - type: recall_at_3 value: 7.483 - type: recall_at_5 value: 10.173 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.04084311714061 - type: cos_sim_spearman value: 77.51342467443078 - type: euclidean_pearson value: 80.0321166028479 - type: euclidean_spearman value: 77.29249114733226 - type: manhattan_pearson value: 80.03105964262431 - type: manhattan_spearman value: 77.22373689514794 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 84.1680158034387 - type: cos_sim_spearman value: 76.55983344071117 - type: euclidean_pearson value: 79.75266678300143 - type: euclidean_spearman value: 75.34516823467025 - type: manhattan_pearson value: 79.75959151517357 - type: manhattan_spearman value: 75.42330344141912 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 76.48898993209346 - type: cos_sim_spearman value: 76.96954120323366 - type: euclidean_pearson value: 76.94139109279668 - type: euclidean_spearman value: 76.85860283201711 - type: manhattan_pearson value: 76.6944095091912 - type: manhattan_spearman value: 76.61096912972553 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 77.85082366246944 - type: cos_sim_spearman value: 75.52053350101731 - type: euclidean_pearson value: 77.1165845070926 - type: euclidean_spearman value: 75.31216065884388 - type: manhattan_pearson value: 77.06193941833494 - type: manhattan_spearman value: 75.31003701700112 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - 
type: cos_sim_pearson value: 86.36305246526497 - type: cos_sim_spearman value: 87.11704613927415 - type: euclidean_pearson value: 86.04199125810939 - type: euclidean_spearman value: 86.51117572414263 - type: manhattan_pearson value: 86.0805106816633 - type: manhattan_spearman value: 86.52798366512229 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 82.18536255599724 - type: cos_sim_spearman value: 83.63377151025418 - type: euclidean_pearson value: 83.24657467993141 - type: euclidean_spearman value: 84.02751481993825 - type: manhattan_pearson value: 83.11941806582371 - type: manhattan_spearman value: 83.84251281019304 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (ko-ko) config: ko-ko split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 78.95816528475514 - type: cos_sim_spearman value: 78.86607380120462 - type: euclidean_pearson value: 78.51268699230545 - type: euclidean_spearman value: 79.11649316502229 - type: manhattan_pearson value: 78.32367302808157 - type: manhattan_spearman value: 78.90277699624637 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (ar-ar) config: ar-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.89126914997624 - type: cos_sim_spearman value: 73.0296921832678 - type: euclidean_pearson value: 71.50385903677738 - type: euclidean_spearman value: 73.13368899716289 - type: manhattan_pearson value: 71.47421463379519 - type: manhattan_spearman value: 73.03383242946575 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-ar) config: en-ar split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 59.22923684492637 - type: cos_sim_spearman value: 57.41013211368396 - type: euclidean_pearson value: 61.21107388080905 - type: euclidean_spearman value: 60.07620768697254 - type: manhattan_pearson value: 59.60157142786555 - type: manhattan_spearman value: 59.14069604103739 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-de) config: en-de split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 76.24345978774299 - type: cos_sim_spearman value: 77.24225743830719 - type: euclidean_pearson value: 76.66226095469165 - type: euclidean_spearman value: 77.60708820493146 - type: manhattan_pearson value: 76.05303324760429 - type: manhattan_spearman value: 76.96353149912348 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.50879160160852 - type: cos_sim_spearman value: 86.43594662965224 - type: euclidean_pearson value: 86.06846012826577 - type: euclidean_spearman value: 86.02041395794136 - type: manhattan_pearson value: 86.10916255616904 - type: manhattan_spearman value: 86.07346068198953 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-tr) config: en-tr split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 58.39803698977196 - type: cos_sim_spearman value: 55.96910950423142 - type: euclidean_pearson value: 58.17941175613059 - type: euclidean_spearman value: 55.03019330522745 - type: manhattan_pearson value: 
57.333358138183286 - type: manhattan_spearman value: 54.04614023149965 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-en) config: es-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 70.98304089637197 - type: cos_sim_spearman value: 72.44071656215888 - type: euclidean_pearson value: 72.19224359033983 - type: euclidean_spearman value: 73.89871188913025 - type: manhattan_pearson value: 71.21098311547406 - type: manhattan_spearman value: 72.93405764824821 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (es-es) config: es-es split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 85.99792397466308 - type: cos_sim_spearman value: 84.83824377879495 - type: euclidean_pearson value: 85.70043288694438 - type: euclidean_spearman value: 84.70627558703686 - type: manhattan_pearson value: 85.89570850150801 - type: manhattan_spearman value: 84.95806105313007 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (fr-en) config: fr-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 72.21850322994712 - type: cos_sim_spearman value: 72.28669398117248 - type: euclidean_pearson value: 73.40082510412948 - type: euclidean_spearman value: 73.0326539281865 - type: manhattan_pearson value: 71.8659633964841 - type: manhattan_spearman value: 71.57817425823303 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (it-en) config: it-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 75.80921368595645 - type: cos_sim_spearman value: 77.33209091229315 - type: euclidean_pearson value: 76.53159540154829 - type: euclidean_spearman value: 78.17960842810093 - type: manhattan_pearson value: 76.13530186637601 - type: manhattan_spearman value: 78.00701437666875 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (nl-en) config: nl-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 74.74980608267349 - type: cos_sim_spearman value: 75.37597374318821 - type: euclidean_pearson value: 74.90506081911661 - type: euclidean_spearman value: 75.30151613124521 - type: manhattan_pearson value: 74.62642745918002 - type: manhattan_spearman value: 75.18619716592303 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 59.632662289205584 - type: cos_sim_spearman value: 60.938543391610914 - type: euclidean_pearson value: 62.113200529767056 - type: euclidean_spearman value: 61.410312633261164 - type: manhattan_pearson value: 61.75494698945686 - type: manhattan_spearman value: 60.92726195322362 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de) config: de split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 45.283470551557244 - type: cos_sim_spearman value: 53.44833015864201 - type: euclidean_pearson value: 41.17892011120893 - type: euclidean_spearman value: 53.81441383126767 - type: manhattan_pearson value: 41.17482200420659 - type: manhattan_spearman value: 53.82180269276363 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es) config: es split: test revision: 
6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 60.5069165306236 - type: cos_sim_spearman value: 66.87803259033826 - type: euclidean_pearson value: 63.5428979418236 - type: euclidean_spearman value: 66.9293576586897 - type: manhattan_pearson value: 63.59789526178922 - type: manhattan_spearman value: 66.86555009875066 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (pl) config: pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 28.23026196280264 - type: cos_sim_spearman value: 35.79397812652861 - type: euclidean_pearson value: 17.828102102767353 - type: euclidean_spearman value: 35.721501145568894 - type: manhattan_pearson value: 17.77134274219677 - type: manhattan_spearman value: 35.98107902846267 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (tr) config: tr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 56.51946541393812 - type: cos_sim_spearman value: 63.714686006214485 - type: euclidean_pearson value: 58.32104651305898 - type: euclidean_spearman value: 62.237110895702216 - type: manhattan_pearson value: 58.579416468759185 - type: manhattan_spearman value: 62.459738981727 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (ar) config: ar split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 48.76009839569795 - type: cos_sim_spearman value: 56.65188431953149 - type: euclidean_pearson value: 50.997682160915595 - type: euclidean_spearman value: 55.99910008818135 - type: manhattan_pearson value: 50.76220659606342 - type: manhattan_spearman value: 55.517347595391456 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (ru) config: ru split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 51.232731157702425 - type: cos_sim_spearman value: 59.89531877658345 - type: euclidean_pearson value: 49.937914570348376 - type: euclidean_spearman value: 60.220905659334036 - type: manhattan_pearson value: 50.00987996844193 - type: manhattan_spearman value: 60.081341480977926 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.717524559088005 - type: cos_sim_spearman value: 66.83570886252286 - type: euclidean_pearson value: 58.41338625505467 - type: euclidean_spearman value: 66.68991427704938 - type: manhattan_pearson value: 58.78638572916807 - type: manhattan_spearman value: 66.58684161046335 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr) config: fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 73.2962042954962 - type: cos_sim_spearman value: 76.58255504852025 - type: euclidean_pearson value: 75.70983192778257 - type: euclidean_spearman value: 77.4547684870542 - type: manhattan_pearson value: 75.75565853870485 - type: manhattan_spearman value: 76.90208974949428 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-en) config: de-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.47396266924846 - type: cos_sim_spearman value: 56.492267162048606 - type: euclidean_pearson value: 55.998505203070195 - type: euclidean_spearman value: 
56.46447012960222 - type: manhattan_pearson value: 54.873172394430995 - type: manhattan_spearman value: 56.58111534551218 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-en) config: es-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 69.87177267688686 - type: cos_sim_spearman value: 74.57160943395763 - type: euclidean_pearson value: 70.88330406826788 - type: euclidean_spearman value: 74.29767636038422 - type: manhattan_pearson value: 71.38245248369536 - type: manhattan_spearman value: 74.53102232732175 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (it) config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 72.80225656959544 - type: cos_sim_spearman value: 76.52646173725735 - type: euclidean_pearson value: 73.95710720200799 - type: euclidean_spearman value: 76.54040031984111 - type: manhattan_pearson value: 73.89679971946774 - type: manhattan_spearman value: 76.60886958161574 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (pl-en) config: pl-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 70.70844249898789 - type: cos_sim_spearman value: 72.68571783670241 - type: euclidean_pearson value: 72.38800772441031 - type: euclidean_spearman value: 72.86804422703312 - type: manhattan_pearson value: 71.29840508203515 - type: manhattan_spearman value: 71.86264441749513 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh-en) config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 58.647478923935694 - type: cos_sim_spearman value: 63.74453623540931 - type: euclidean_pearson value: 59.60138032437505 - type: euclidean_spearman value: 63.947930832166065 - type: manhattan_pearson value: 58.59735509491861 - type: manhattan_spearman value: 62.082503844627404 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (es-it) config: es-it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 65.8722516867162 - type: cos_sim_spearman value: 71.81208592523012 - type: euclidean_pearson value: 67.95315252165956 - type: euclidean_spearman value: 73.00749822046009 - type: manhattan_pearson value: 68.07884688638924 - type: manhattan_spearman value: 72.34210325803069 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-fr) config: de-fr split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 54.5405814240949 - type: cos_sim_spearman value: 60.56838649023775 - type: euclidean_pearson value: 53.011731611314104 - type: euclidean_spearman value: 58.533194841668426 - type: manhattan_pearson value: 53.623067729338494 - type: manhattan_spearman value: 58.018756154446926 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (de-pl) config: de-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 13.611046866216112 - type: cos_sim_spearman value: 28.238192909158492 - type: euclidean_pearson value: 22.16189199885129 - type: euclidean_spearman value: 35.012895679076564 - type: manhattan_pearson value: 21.969771178698387 - type: manhattan_spearman value: 32.456985088607475 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr-pl) 
config: fr-pl split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 74.58077407011655 - type: cos_sim_spearman value: 84.51542547285167 - type: euclidean_pearson value: 74.64613843596234 - type: euclidean_spearman value: 84.51542547285167 - type: manhattan_pearson value: 75.15335973101396 - type: manhattan_spearman value: 84.51542547285167 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 82.0739825531578 - type: cos_sim_spearman value: 84.01057479311115 - type: euclidean_pearson value: 83.85453227433344 - type: euclidean_spearman value: 84.01630226898655 - type: manhattan_pearson value: 83.75323603028978 - type: manhattan_spearman value: 83.89677983727685 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 78.12945623123957 - type: mrr value: 93.87738713719106 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 52.983000000000004 - type: map_at_10 value: 62.946000000000005 - type: map_at_100 value: 63.514 - type: map_at_1000 value: 63.554 - type: map_at_3 value: 60.183 - type: map_at_5 value: 61.672000000000004 - type: mrr_at_1 value: 55.667 - type: mrr_at_10 value: 64.522 - type: mrr_at_100 value: 64.957 - type: mrr_at_1000 value: 64.995 - type: mrr_at_3 value: 62.388999999999996 - type: mrr_at_5 value: 63.639 - type: ndcg_at_1 value: 55.667 - type: ndcg_at_10 value: 67.704 - type: ndcg_at_100 value: 70.299 - type: ndcg_at_1000 value: 71.241 - type: ndcg_at_3 value: 62.866 - type: ndcg_at_5 value: 65.16999999999999 - type: precision_at_1 value: 55.667 - type: precision_at_10 value: 9.033 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 24.444 - type: precision_at_5 value: 16.133 - type: recall_at_1 value: 52.983000000000004 - type: recall_at_10 value: 80.656 - type: recall_at_100 value: 92.5 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 67.744 - type: recall_at_5 value: 73.433 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.72772277227723 - type: cos_sim_ap value: 92.17845897992215 - type: cos_sim_f1 value: 85.9746835443038 - type: cos_sim_precision value: 87.07692307692308 - type: cos_sim_recall value: 84.89999999999999 - type: dot_accuracy value: 99.3039603960396 - type: dot_ap value: 60.70244020124878 - type: dot_f1 value: 59.92742353551063 - type: dot_precision value: 62.21743810548978 - type: dot_recall value: 57.8 - type: euclidean_accuracy value: 99.71683168316832 - type: euclidean_ap value: 91.53997039964659 - type: euclidean_f1 value: 84.88372093023257 - type: euclidean_precision value: 90.02242152466367 - type: euclidean_recall value: 80.30000000000001 - type: manhattan_accuracy value: 99.72376237623763 - type: manhattan_ap value: 91.80756777790289 - type: manhattan_f1 value: 85.48468106479157 - type: manhattan_precision value: 85.8728557013118 - type: manhattan_recall value: 85.1 - type: max_accuracy value: 99.72772277227723 - type: max_ap value: 92.17845897992215 - 
type: max_f1 value: 85.9746835443038 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 53.52464042600003 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 32.071631948736 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.19552407604654 - type: mrr value: 49.95269130379425 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.345293033095427 - type: cos_sim_spearman value: 29.976931423258403 - type: dot_pearson value: 27.047078008958408 - type: dot_spearman value: 27.75894368380218 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.706 - type: map_at_100 value: 9.634 - type: map_at_1000 value: 23.665 - type: map_at_3 value: 0.5950000000000001 - type: map_at_5 value: 0.95 - type: mrr_at_1 value: 86.0 - type: mrr_at_10 value: 91.8 - type: mrr_at_100 value: 91.8 - type: mrr_at_1000 value: 91.8 - type: mrr_at_3 value: 91.0 - type: mrr_at_5 value: 91.8 - type: ndcg_at_1 value: 80.0 - type: ndcg_at_10 value: 72.573 - type: ndcg_at_100 value: 53.954 - type: ndcg_at_1000 value: 47.760999999999996 - type: ndcg_at_3 value: 76.173 - type: ndcg_at_5 value: 75.264 - type: precision_at_1 value: 86.0 - type: precision_at_10 value: 76.4 - type: precision_at_100 value: 55.50000000000001 - type: precision_at_1000 value: 21.802 - type: precision_at_3 value: 81.333 - type: precision_at_5 value: 80.4 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 1.925 - type: recall_at_100 value: 12.762 - type: recall_at_1000 value: 44.946000000000005 - type: recall_at_3 value: 0.634 - type: recall_at_5 value: 1.051 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (sqi-eng) config: sqi-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.0 - type: f1 value: 88.55666666666666 - type: precision value: 87.46166666666667 - type: recall value: 91.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fry-eng) config: fry-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.22543352601156 - type: f1 value: 51.03220478943021 - type: precision value: 48.8150289017341 - type: recall value: 57.22543352601156 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kur-eng) config: kur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.58536585365854 - type: f1 value: 39.66870798578116 - type: precision value: 37.416085946573745 - type: recall value: 46.58536585365854 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tur-eng) config: tur-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.7 - type: f1 
value: 86.77999999999999 - type: precision value: 85.45333333333332 - type: recall value: 89.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (deu-eng) config: deu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 97.39999999999999 - type: f1 value: 96.58333333333331 - type: precision value: 96.2 - type: recall value: 97.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nld-eng) config: nld-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.4 - type: f1 value: 90.3 - type: precision value: 89.31666666666668 - type: recall value: 92.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ron-eng) config: ron-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.9 - type: f1 value: 83.67190476190476 - type: precision value: 82.23333333333332 - type: recall value: 86.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ang-eng) config: ang-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 50.0 - type: f1 value: 42.23229092632078 - type: precision value: 39.851634683724235 - type: recall value: 50.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ido-eng) config: ido-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.3 - type: f1 value: 70.86190476190477 - type: precision value: 68.68777777777777 - type: recall value: 76.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jav-eng) config: jav-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 57.073170731707314 - type: f1 value: 50.658958927251604 - type: precision value: 48.26480836236933 - type: recall value: 57.073170731707314 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (isl-eng) config: isl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 68.2 - type: f1 value: 62.156507936507936 - type: precision value: 59.84964285714286 - type: recall value: 68.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slv-eng) config: slv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.52126366950182 - type: f1 value: 72.8496210148701 - type: precision value: 70.92171498003819 - type: recall value: 77.52126366950182 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cym-eng) config: cym-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 70.78260869565217 - type: f1 value: 65.32422360248447 - type: precision value: 63.063067367415194 - type: recall value: 70.78260869565217 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kaz-eng) config: kaz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.43478260869566 - type: f1 value: 73.02608695652172 - type: precision value: 70.63768115942028 - type: recall value: 78.43478260869566 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (est-eng) config: est-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.9 - type: f1 value: 55.309753694581275 - type: precision value: 53.130476190476195 - type: recall value: 60.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (heb-eng) config: heb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.89999999999999 - type: f1 value: 67.92023809523809 - type: precision value: 65.82595238095237 - type: recall value: 72.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gla-eng) config: gla-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.80337756332931 - type: f1 value: 39.42174900558496 - type: precision value: 36.97101116280851 - type: recall value: 46.80337756332931 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mar-eng) config: mar-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.8 - type: f1 value: 86.79 - type: precision value: 85.375 - type: recall value: 89.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lat-eng) config: lat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.199999999999996 - type: f1 value: 39.95484348984349 - type: precision value: 37.561071428571424 - type: recall value: 47.199999999999996 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bel-eng) config: bel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.8 - type: f1 value: 84.68190476190475 - type: precision value: 83.275 - type: recall value: 87.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pms-eng) config: pms-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.76190476190476 - type: f1 value: 42.14965986394558 - type: precision value: 39.96743626743626 - type: recall value: 48.76190476190476 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gle-eng) config: gle-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.10000000000001 - type: f1 value: 59.58580086580086 - type: precision value: 57.150238095238095 - type: recall value: 66.10000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pes-eng) config: pes-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.3 - type: f1 value: 84.0 - type: precision value: 82.48666666666666 - type: recall value: 87.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nob-eng) config: nob-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.4 - type: f1 value: 87.79523809523809 - type: precision value: 86.6 - type: recall value: 90.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bul-eng) config: bul-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.0 - type: f1 value: 83.81 - type: precision value: 82.36666666666666 - type: recall value: 87.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba 
(cbk-eng) config: cbk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.9 - type: f1 value: 57.76533189033189 - type: precision value: 55.50595238095239 - type: recall value: 63.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hun-eng) config: hun-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 76.1 - type: f1 value: 71.83690476190478 - type: precision value: 70.04928571428573 - type: recall value: 76.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uig-eng) config: uig-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.3 - type: f1 value: 59.32626984126984 - type: precision value: 56.62535714285713 - type: recall value: 66.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (rus-eng) config: rus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.60000000000001 - type: f1 value: 87.96333333333334 - type: precision value: 86.73333333333333 - type: recall value: 90.60000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (spa-eng) config: spa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 93.10000000000001 - type: f1 value: 91.10000000000001 - type: precision value: 90.16666666666666 - type: recall value: 93.10000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hye-eng) config: hye-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.71428571428571 - type: f1 value: 82.29142600436403 - type: precision value: 80.8076626877166 - type: recall value: 85.71428571428571 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tel-eng) config: tel-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.88888888888889 - type: f1 value: 85.7834757834758 - type: precision value: 84.43732193732193 - type: recall value: 88.88888888888889 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (afr-eng) config: afr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.5 - type: f1 value: 85.67190476190476 - type: precision value: 84.43333333333332 - type: recall value: 88.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mon-eng) config: mon-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.72727272727273 - type: f1 value: 78.21969696969695 - type: precision value: 76.18181818181819 - type: recall value: 82.72727272727273 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arz-eng) config: arz-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 61.0062893081761 - type: f1 value: 55.13976240391334 - type: precision value: 52.92112499659669 - type: recall value: 61.0062893081761 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hrv-eng) config: hrv-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 89.5 - type: f1 value: 86.86666666666666 - type: precision value: 
85.69166666666668 - type: recall value: 89.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nov-eng) config: nov-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.54085603112841 - type: f1 value: 68.56031128404669 - type: precision value: 66.53047989623866 - type: recall value: 73.54085603112841 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (gsw-eng) config: gsw-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 43.58974358974359 - type: f1 value: 36.45299145299145 - type: precision value: 33.81155881155882 - type: recall value: 43.58974358974359 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nds-eng) config: nds-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 59.599999999999994 - type: f1 value: 53.264689754689755 - type: precision value: 50.869166666666665 - type: recall value: 59.599999999999994 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ukr-eng) config: ukr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.2 - type: f1 value: 81.61666666666665 - type: precision value: 80.02833333333335 - type: recall value: 85.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (uzb-eng) config: uzb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 63.78504672897196 - type: f1 value: 58.00029669188548 - type: precision value: 55.815809968847354 - type: recall value: 63.78504672897196 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lit-eng) config: lit-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 66.5 - type: f1 value: 61.518333333333345 - type: precision value: 59.622363699102834 - type: recall value: 66.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ina-eng) config: ina-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.6 - type: f1 value: 85.60222222222221 - type: precision value: 84.27916666666665 - type: recall value: 88.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lfn-eng) config: lfn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 58.699999999999996 - type: f1 value: 52.732375957375965 - type: precision value: 50.63214035964035 - type: recall value: 58.699999999999996 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (zsm-eng) config: zsm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.10000000000001 - type: f1 value: 89.99666666666667 - type: precision value: 89.03333333333333 - type: recall value: 92.10000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ita-eng) config: ita-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.10000000000001 - type: f1 value: 87.55666666666667 - type: precision value: 86.36166666666668 - type: recall value: 90.10000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cmn-eng) config: 
cmn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.4 - type: f1 value: 88.89000000000001 - type: precision value: 87.71166666666666 - type: recall value: 91.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (lvs-eng) config: lvs-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.7 - type: f1 value: 60.67427750410509 - type: precision value: 58.71785714285714 - type: recall value: 65.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (glg-eng) config: glg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.39999999999999 - type: f1 value: 81.93190476190475 - type: precision value: 80.37833333333333 - type: recall value: 85.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ceb-eng) config: ceb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 47.833333333333336 - type: f1 value: 42.006625781625786 - type: precision value: 40.077380952380956 - type: recall value: 47.833333333333336 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bre-eng) config: bre-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.4 - type: f1 value: 8.24465007215007 - type: precision value: 7.664597069597071 - type: recall value: 10.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ben-eng) config: ben-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.6 - type: f1 value: 77.76333333333334 - type: precision value: 75.57833333333332 - type: recall value: 82.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swg-eng) config: swg-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 52.67857142857143 - type: f1 value: 44.302721088435376 - type: precision value: 41.49801587301587 - type: recall value: 52.67857142857143 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (arq-eng) config: arq-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 28.3205268935236 - type: f1 value: 22.426666605171157 - type: precision value: 20.685900116470915 - type: recall value: 28.3205268935236 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kab-eng) config: kab-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 22.7 - type: f1 value: 17.833970473970474 - type: precision value: 16.407335164835164 - type: recall value: 22.7 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fra-eng) config: fra-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.2 - type: f1 value: 89.92999999999999 - type: precision value: 88.87 - type: recall value: 92.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (por-eng) config: por-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.4 - type: f1 value: 89.25 - type: precision value: 88.21666666666667 - type: recall value: 91.4 - task: type: BitextMining dataset: type: 
mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tat-eng) config: tat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 69.19999999999999 - type: f1 value: 63.38269841269841 - type: precision value: 61.14773809523809 - type: recall value: 69.19999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (oci-eng) config: oci-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 48.8 - type: f1 value: 42.839915639915645 - type: precision value: 40.770287114845935 - type: recall value: 48.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pol-eng) config: pol-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.8 - type: f1 value: 85.90666666666668 - type: precision value: 84.54166666666666 - type: recall value: 88.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (war-eng) config: war-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 46.6 - type: f1 value: 40.85892920804686 - type: precision value: 38.838223114604695 - type: recall value: 46.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (aze-eng) config: aze-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.0 - type: f1 value: 80.14190476190475 - type: precision value: 78.45333333333333 - type: recall value: 84.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (vie-eng) config: vie-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.5 - type: f1 value: 87.78333333333333 - type: precision value: 86.5 - type: recall value: 90.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (nno-eng) config: nno-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.5 - type: f1 value: 69.48397546897547 - type: precision value: 67.51869047619049 - type: recall value: 74.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cha-eng) config: cha-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 32.846715328467155 - type: f1 value: 27.828177499710343 - type: precision value: 26.63451511991658 - type: recall value: 32.846715328467155 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mhr-eng) config: mhr-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.0 - type: f1 value: 6.07664116764988 - type: precision value: 5.544177607179943 - type: recall value: 8.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dan-eng) config: dan-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.6 - type: f1 value: 84.38555555555554 - type: precision value: 82.91583333333334 - type: recall value: 87.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ell-eng) config: ell-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 87.5 - type: f1 value: 84.08333333333331 - type: precision value: 82.47333333333333 - type: recall value: 87.5 - task: type: 
BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (amh-eng) config: amh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.95238095238095 - type: f1 value: 76.13095238095238 - type: precision value: 74.05753968253967 - type: recall value: 80.95238095238095 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (pam-eng) config: pam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.799999999999999 - type: f1 value: 6.971422975172975 - type: precision value: 6.557814916172301 - type: recall value: 8.799999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hsb-eng) config: hsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 44.099378881987576 - type: f1 value: 37.01649742022413 - type: precision value: 34.69420618488942 - type: recall value: 44.099378881987576 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (srp-eng) config: srp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 84.3 - type: f1 value: 80.32666666666667 - type: precision value: 78.60666666666665 - type: recall value: 84.3 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (epo-eng) config: epo-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 92.5 - type: f1 value: 90.49666666666666 - type: precision value: 89.56666666666668 - type: recall value: 92.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kzj-eng) config: kzj-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 10.0 - type: f1 value: 8.268423529875141 - type: precision value: 7.878118605532398 - type: recall value: 10.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (awa-eng) config: awa-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 79.22077922077922 - type: f1 value: 74.27128427128426 - type: precision value: 72.28715728715729 - type: recall value: 79.22077922077922 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fao-eng) config: fao-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.64885496183206 - type: f1 value: 58.87495456197747 - type: precision value: 55.992366412213734 - type: recall value: 65.64885496183206 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mal-eng) config: mal-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 96.06986899563319 - type: f1 value: 94.78408539543909 - type: precision value: 94.15332362930616 - type: recall value: 96.06986899563319 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ile-eng) config: ile-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.2 - type: f1 value: 71.72571428571428 - type: precision value: 69.41000000000001 - type: recall value: 77.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (bos-eng) config: bos-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: 
accuracy value: 86.4406779661017 - type: f1 value: 83.2391713747646 - type: precision value: 81.74199623352166 - type: recall value: 86.4406779661017 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cor-eng) config: cor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 8.4 - type: f1 value: 6.017828743398003 - type: precision value: 5.4829865484756795 - type: recall value: 8.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (cat-eng) config: cat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.5 - type: f1 value: 79.74833333333333 - type: precision value: 78.04837662337664 - type: recall value: 83.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (eus-eng) config: eus-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 60.4 - type: f1 value: 54.467301587301584 - type: precision value: 52.23242424242424 - type: recall value: 60.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yue-eng) config: yue-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.9 - type: f1 value: 69.68699134199134 - type: precision value: 67.59873015873016 - type: recall value: 74.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swe-eng) config: swe-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.0 - type: f1 value: 84.9652380952381 - type: precision value: 83.66166666666666 - type: recall value: 88.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dtp-eng) config: dtp-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 9.1 - type: f1 value: 7.681244588744588 - type: precision value: 7.370043290043291 - type: recall value: 9.1 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kat-eng) config: kat-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 80.9651474530831 - type: f1 value: 76.84220605132133 - type: precision value: 75.19606398962966 - type: recall value: 80.9651474530831 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (jpn-eng) config: jpn-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 86.9 - type: f1 value: 83.705 - type: precision value: 82.3120634920635 - type: recall value: 86.9 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (csb-eng) config: csb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 29.64426877470356 - type: f1 value: 23.98763072676116 - type: precision value: 22.506399397703746 - type: recall value: 29.64426877470356 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (xho-eng) config: xho-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 70.4225352112676 - type: f1 value: 62.84037558685445 - type: precision value: 59.56572769953053 - type: recall value: 70.4225352112676 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (orv-eng) config: orv-eng split: test 
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 19.64071856287425 - type: f1 value: 15.125271011207756 - type: precision value: 13.865019261197494 - type: recall value: 19.64071856287425 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ind-eng) config: ind-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 90.2 - type: f1 value: 87.80666666666666 - type: precision value: 86.70833333333331 - type: recall value: 90.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tuk-eng) config: tuk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 23.15270935960591 - type: f1 value: 18.407224958949097 - type: precision value: 16.982385430661292 - type: recall value: 23.15270935960591 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (max-eng) config: max-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.98591549295775 - type: f1 value: 49.94718309859154 - type: precision value: 47.77864154624717 - type: recall value: 55.98591549295775 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (swh-eng) config: swh-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.07692307692307 - type: f1 value: 66.74358974358974 - type: precision value: 64.06837606837607 - type: recall value: 73.07692307692307 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (hin-eng) config: hin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 94.89999999999999 - type: f1 value: 93.25 - type: precision value: 92.43333333333332 - type: recall value: 94.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (dsb-eng) config: dsb-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 37.78705636743215 - type: f1 value: 31.63899658680452 - type: precision value: 29.72264397629742 - type: recall value: 37.78705636743215 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ber-eng) config: ber-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 21.6 - type: f1 value: 16.91697302697303 - type: precision value: 15.71225147075147 - type: recall value: 21.6 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tam-eng) config: tam-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 85.01628664495115 - type: f1 value: 81.38514037536838 - type: precision value: 79.83170466883823 - type: recall value: 85.01628664495115 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (slk-eng) config: slk-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.39999999999999 - type: f1 value: 79.96380952380952 - type: precision value: 78.48333333333333 - type: recall value: 83.39999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tgl-eng) config: tgl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 83.2 - type: f1 value: 79.26190476190476 - type: precision 
value: 77.58833333333334 - type: recall value: 83.2 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ast-eng) config: ast-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 75.59055118110236 - type: f1 value: 71.66854143232096 - type: precision value: 70.30183727034121 - type: recall value: 75.59055118110236 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (mkd-eng) config: mkd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 65.5 - type: f1 value: 59.26095238095238 - type: precision value: 56.81909090909092 - type: recall value: 65.5 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (khm-eng) config: khm-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 55.26315789473685 - type: f1 value: 47.986523325858506 - type: precision value: 45.33950006595436 - type: recall value: 55.26315789473685 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ces-eng) config: ces-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 82.89999999999999 - type: f1 value: 78.835 - type: precision value: 77.04761904761905 - type: recall value: 82.89999999999999 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tzl-eng) config: tzl-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 43.269230769230774 - type: f1 value: 36.20421245421245 - type: precision value: 33.57371794871795 - type: recall value: 43.269230769230774 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (urd-eng) config: urd-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 88.0 - type: f1 value: 84.70666666666666 - type: precision value: 83.23166666666665 - type: recall value: 88.0 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (ara-eng) config: ara-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 77.4 - type: f1 value: 72.54666666666667 - type: precision value: 70.54318181818181 - type: recall value: 77.4 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (kor-eng) config: kor-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 78.60000000000001 - type: f1 value: 74.1588888888889 - type: precision value: 72.30250000000001 - type: recall value: 78.60000000000001 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (yid-eng) config: yid-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 72.40566037735849 - type: f1 value: 66.82587328813744 - type: precision value: 64.75039308176099 - type: recall value: 72.40566037735849 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (fin-eng) config: fin-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 73.8 - type: f1 value: 68.56357142857144 - type: precision value: 66.3178822055138 - type: recall value: 73.8 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (tha-eng) config: tha-eng split: test revision: 
9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 91.78832116788321 - type: f1 value: 89.3552311435523 - type: precision value: 88.20559610705597 - type: recall value: 91.78832116788321 - task: type: BitextMining dataset: type: mteb/tatoeba-bitext-mining name: MTEB Tatoeba (wuu-eng) config: wuu-eng split: test revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553 metrics: - type: accuracy value: 74.3 - type: f1 value: 69.05085581085581 - type: precision value: 66.955 - type: recall value: 74.3 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.896 - type: map_at_10 value: 8.993 - type: map_at_100 value: 14.133999999999999 - type: map_at_1000 value: 15.668000000000001 - type: map_at_3 value: 5.862 - type: map_at_5 value: 7.17 - type: mrr_at_1 value: 34.694 - type: mrr_at_10 value: 42.931000000000004 - type: mrr_at_100 value: 44.81 - type: mrr_at_1000 value: 44.81 - type: mrr_at_3 value: 38.435 - type: mrr_at_5 value: 41.701 - type: ndcg_at_1 value: 31.633 - type: ndcg_at_10 value: 21.163 - type: ndcg_at_100 value: 33.306000000000004 - type: ndcg_at_1000 value: 45.275999999999996 - type: ndcg_at_3 value: 25.685999999999996 - type: ndcg_at_5 value: 23.732 - type: precision_at_1 value: 34.694 - type: precision_at_10 value: 17.755000000000003 - type: precision_at_100 value: 6.938999999999999 - type: precision_at_1000 value: 1.48 - type: precision_at_3 value: 25.85 - type: precision_at_5 value: 23.265 - type: recall_at_1 value: 2.896 - type: recall_at_10 value: 13.333999999999998 - type: recall_at_100 value: 43.517 - type: recall_at_1000 value: 79.836 - type: recall_at_3 value: 6.306000000000001 - type: recall_at_5 value: 8.825 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.3874 - type: ap value: 13.829909072469423 - type: f1 value: 53.54534203543492 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 62.62026032823995 - type: f1 value: 62.85251350485221 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 33.21527881409797 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.97943613280086 - type: cos_sim_ap value: 70.75454316885921 - type: cos_sim_f1 value: 65.38274012676743 - type: cos_sim_precision value: 60.761214318078835 - type: cos_sim_recall value: 70.76517150395777 - type: dot_accuracy value: 79.0546581629612 - type: dot_ap value: 47.3197121792147 - type: dot_f1 value: 49.20106524633821 - type: dot_precision value: 42.45499808502489 - type: dot_recall value: 58.49604221635884 - type: euclidean_accuracy value: 85.08076533349228 - type: euclidean_ap value: 70.95016106374474 - type: euclidean_f1 value: 65.43987900176455 - type: euclidean_precision value: 62.64478764478765 - type: euclidean_recall value: 68.49604221635884 - type: 
manhattan_accuracy value: 84.93771234428085 - type: manhattan_ap value: 70.63668388755362 - type: manhattan_f1 value: 65.23895401262398 - type: manhattan_precision value: 56.946084218811485 - type: manhattan_recall value: 76.35883905013192 - type: max_accuracy value: 85.08076533349228 - type: max_ap value: 70.95016106374474 - type: max_f1 value: 65.43987900176455 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.69096130709822 - type: cos_sim_ap value: 84.82526278228542 - type: cos_sim_f1 value: 77.65485060585536 - type: cos_sim_precision value: 75.94582658619167 - type: cos_sim_recall value: 79.44256236526024 - type: dot_accuracy value: 80.97954748321496 - type: dot_ap value: 64.81642914145866 - type: dot_f1 value: 60.631996987229975 - type: dot_precision value: 54.5897293631712 - type: dot_recall value: 68.17831844779796 - type: euclidean_accuracy value: 88.6987231730508 - type: euclidean_ap value: 84.80003825477253 - type: euclidean_f1 value: 77.67194179854496 - type: euclidean_precision value: 75.7128235122094 - type: euclidean_recall value: 79.73514012935017 - type: manhattan_accuracy value: 88.62692591298949 - type: manhattan_ap value: 84.80451408255276 - type: manhattan_f1 value: 77.69888949572183 - type: manhattan_precision value: 73.70311528631622 - type: manhattan_recall value: 82.15275639051433 - type: max_accuracy value: 88.6987231730508 - type: max_ap value: 84.82526278228542 - type: max_f1 value: 77.69888949572183 language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: mit --- ### Optimized and quantized version of the original model Optimization format: `ONNX` Quantization: `int8` The original model is available at [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)
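The card stops at the format note above; as a hedged sketch (the ONNX repository id below is a placeholder, since the card does not state it, and the tokenizer is taken from the original model), the `int8` ONNX export could be run through `optimum`'s `onnxruntime` backend:

```python
# Hypothetical sketch: run the int8 ONNX export with optimum + onnxruntime.
# "your-namespace/..." is a placeholder repo id, not the real one.
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
import torch.nn.functional as F

tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-small")
model = ORTModelForFeatureExtraction.from_pretrained("your-namespace/multilingual-e5-small-onnx-int8")

texts = ["query: quanto custa?", "passage: O preço é 10 euros."]  # E5 expects these prefixes
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**inputs)

# Mask-aware mean pooling over the last hidden state, then L2 normalization.
mask = inputs["attention_mask"].unsqueeze(-1).float()
emb = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
emb = F.normalize(emb, p=2, dim=1)
print((emb[0] @ emb[1]).item())  # cosine similarity between query and passage
```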
ihughes15234/llama_3_1_8bi_tictactoe_dpo5epoch_v3
ihughes15234
2024-11-13T12:16:08Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:ihughes15234/llama_3_1_8bi_tictactoe_dpo3epoch_v3", "base_model:finetune:ihughes15234/llama_3_1_8bi_tictactoe_dpo3epoch_v3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T12:10:18Z
--- base_model: ihughes15234/llama_3_1_8bi_tictactoe_dpo3epoch_v3 tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ihughes15234 - **License:** apache-2.0 - **Finetuned from model:** ihughes15234/llama_3_1_8bi_tictactoe_dpo3epoch_v3 This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
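As a hedged usage sketch (only the repo id comes from this record; the prompt and generation settings are illustrative), the chat-tuned model can be queried through the standard `transformers` chat-template flow:

```python
# Illustrative inference sketch for this chat-tuned Llama model.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ihughes15234/llama_3_1_8bi_tictactoe_dpo5epoch_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Let's play tic-tac-toe. You are X; make the first move."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```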
bibibobo777/Hw4_model
bibibobo777
2024-11-13T12:11:26Z
9
0
null
[ "tensorboard", "safetensors", "bert", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "region:us" ]
null
2024-11-12T04:10:32Z
--- license: apache-2.0 base_model: google-bert/bert-base-multilingual-cased tags: - generated_from_trainer model-index: - name: Hw4_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hw4_model This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 471 | 0.3513 | | 0.4173 | 2.0 | 942 | 0.3171 | | 0.3049 | 3.0 | 1413 | 0.3143 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0 - Datasets 2.21.0 - Tokenizers 0.19.1
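The hyperparameters listed above map one-to-one onto `transformers`' `TrainingArguments`; as a sketch of the equivalent configuration (the output directory name is an assumption):

```python
# Sketch reconstructing the card's training hyperparameters. The Adam betas
# and epsilon it lists are the library defaults, so they need no extra args.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Hw4_model",  # assumed name, matching the model id
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```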
lyhourt/distilbert-finetuned-emotion
lyhourt
2024-11-13T12:10:10Z
117
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-11T05:08:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aalof/seq2seq_imlla
aalof
2024-11-13T12:06:47Z
49
0
transformers
[ "transformers", "tensorboard", "safetensors", "custom_seq2seq", "generated_from_trainer", "dataset:iva_mt_wslot", "endpoints_compatible", "region:us" ]
null
2024-11-13T12:06:37Z
--- library_name: transformers tags: - generated_from_trainer datasets: - iva_mt_wslot metrics: - bleu model-index: - name: seq2seq_imlla results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # seq2seq_imlla This model is a fine-tuned version of [](https://huggingface.co/) on the iva_mt_wslot dataset. It achieves the following results on the evaluation set: - Loss: 6.0658 - Bleu: 0.0042 - Gen Len: 5.8248 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:------:|:----:|:---------------:|:------:|:-------:| | 7.441 | 0.9992 | 636 | 7.0735 | 0.0 | 5.3206 | | 6.5312 | 2.0 | 1273 | 6.4544 | 0.0175 | 9.5238 | | 6.0704 | 2.9992 | 1909 | 6.2110 | 0.0007 | 4.9967 | | 5.8907 | 4.0 | 2546 | 6.1000 | 0.0055 | 6.944 | | 5.7606 | 4.9961 | 3180 | 6.0658 | 0.0042 | 5.8248 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.0+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
sweetpapa/test1
sweetpapa
2024-11-13T11:49:43Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T11:45:33Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/internlm2-chat-20b-sft-GGUF
mradermacher
2024-11-13T11:48:59Z
6
0
transformers
[ "transformers", "gguf", "en", "base_model:internlm/internlm2-chat-20b-sft", "base_model:quantized:internlm/internlm2-chat-20b-sft", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-12T07:22:10Z
--- base_model: internlm/internlm2-chat-20b-sft language: - en library_name: transformers license: other quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/internlm/internlm2-chat-20b-sft <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/internlm2-chat-20b-sft-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q2_K.gguf) | Q2_K | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q3_K_S.gguf) | Q3_K_S | 8.9 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q3_K_M.gguf) | Q3_K_M | 9.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q3_K_L.gguf) | Q3_K_L | 10.7 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.IQ4_XS.gguf) | IQ4_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q4_K_S.gguf) | Q4_K_S | 11.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q4_K_M.gguf) | Q4_K_M | 12.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q5_K_S.gguf) | Q5_K_S | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q5_K_M.gguf) | Q5_K_M | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q6_K.gguf) | Q6_K | 16.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/internlm2-chat-20b-sft-GGUF/resolve/main/internlm2-chat-20b-sft.Q8_0.gguf) | Q8_0 | 21.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
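As a minimal, hedged sketch of the usage pointer above (the quant choice and settings are illustrative, not from the card), a single-file quant can be loaded with the `llama-cpp-python` bindings:

```python
# Illustrative sketch, not from the original card: load one quant from the
# table above with llama-cpp-python. Filename and settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="internlm2-chat-20b-sft.Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,       # context length; lower it to reduce memory use
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is a GGUF file?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```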
RichardErkhov/Kaspar_-_QueerGPT2-gguf
RichardErkhov
2024-11-13T11:48:59Z
57
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-11-13T11:37:53Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) QueerGPT2 - GGUF - Model creator: https://huggingface.co/Kaspar/ - Original model: https://huggingface.co/Kaspar/QueerGPT2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [QueerGPT2.Q2_K.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q2_K.gguf) | Q2_K | 0.08GB | | [QueerGPT2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [QueerGPT2.Q3_K.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q3_K.gguf) | Q3_K | 0.09GB | | [QueerGPT2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [QueerGPT2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q3_K_L.gguf) | Q3_K_L | 0.1GB | | [QueerGPT2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.IQ4_XS.gguf) | IQ4_XS | 0.1GB | | [QueerGPT2.Q4_0.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q4_0.gguf) | Q4_0 | 0.1GB | | [QueerGPT2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [QueerGPT2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [QueerGPT2.Q4_K.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q4_K.gguf) | Q4_K | 0.11GB | | [QueerGPT2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q4_K_M.gguf) | Q4_K_M | 0.11GB | | [QueerGPT2.Q4_1.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q4_1.gguf) | Q4_1 | 0.11GB | | [QueerGPT2.Q5_0.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q5_0.gguf) | Q5_0 | 0.11GB | | [QueerGPT2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [QueerGPT2.Q5_K.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q5_K.gguf) | Q5_K | 0.12GB | | [QueerGPT2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q5_K_M.gguf) | Q5_K_M | 0.12GB | | [QueerGPT2.Q5_1.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q5_1.gguf) | Q5_1 | 0.12GB | | [QueerGPT2.Q6_K.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q6_K.gguf) | Q6_K | 0.13GB | | [QueerGPT2.Q8_0.gguf](https://huggingface.co/RichardErkhov/Kaspar_-_QueerGPT2-gguf/blob/main/QueerGPT2.Q8_0.gguf) | Q8_0 | 0.17GB | Original model description: --- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: QueerGPT2 results: [] widget: - text: "When I grow up, I want to be a" --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # QueerGPT2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.3634 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.5433 | 1.0 | 13237 | 4.3634 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
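A short illustrative sketch (not part of the original card): any file in the quant table can be fetched programmatically with `huggingface_hub`, after which the returned cache path can be passed to a GGUF runtime such as llama.cpp:

```python
# Illustrative sketch: download one quant from this repo; the Q4_K_M file is
# just an example pick from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/Kaspar_-_QueerGPT2-gguf",
    filename="QueerGPT2.Q4_K_M.gguf",
)
print(path)  # local path of the cached GGUF file
```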
mradermacher/bigstral-12b-32k-8xMoE-GGUF
mradermacher
2024-11-13T11:36:50Z
8
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:bartowski/bigstral-12b-32k-8xMoE", "base_model:quantized:bartowski/bigstral-12b-32k-8xMoE", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-13T07:24:38Z
--- base_model: bartowski/bigstral-12b-32k-8xMoE language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/bartowski/bigstral-12b-32k-8xMoE <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q2_K.gguf) | Q2_K | 30.3 | | | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q3_K_S.gguf) | Q3_K_S | 35.7 | | | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q3_K_M.gguf) | Q3_K_M | 39.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q3_K_L.gguf) | Q3_K_L | 42.3 | | | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.IQ4_XS.gguf) | IQ4_XS | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q4_K_S.gguf) | Q4_K_S | 46.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q4_K_M.gguf) | Q4_K_M | 49.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q5_K_S.gguf.part2of2) | Q5_K_S | 56.4 | | | [PART 1](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q5_K_M.gguf.part2of2) | Q5_K_M | 58.1 | | | [PART 1](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q6_K.gguf.part2of2) | Q6_K | 67.1 | very good quality | | [PART 1](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/bigstral-12b-32k-8xMoE-GGUF/resolve/main/bigstral-12b-32k-8xMoE.Q8_0.gguf.part2of2) | Q8_0 | 86.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some 
other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
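The larger quants above are split into `.part1of2`/`.part2of2` files; as a hedged sketch (the filename is taken from the Q6_K row, and local paths are assumed), the parts can be reassembled by simple byte concatenation before loading:

```python
# Sketch: rejoin a multi-part GGUF quant into a single file. Streams each part
# so the tens of GB of data are never held in memory at once.
import shutil
from pathlib import Path

stem = "bigstral-12b-32k-8xMoE.Q6_K.gguf"
parts = sorted(Path(".").glob(f"{stem}.part*"))
with open(stem, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
print(f"wrote {stem} from {len(parts)} parts")
```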
Rohan-G/bnb_nf4_quantization
Rohan-G
2024-11-13T11:26:30Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-13T11:21:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vijay-ravichander/Llama-1B-Summarization-LoRA-MLP-r64-merged
vijay-ravichander
2024-11-13T11:22:24Z
84
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T11:19:33Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Pandistellina/merged-model-sentiment-llama3
Pandistellina
2024-11-13T11:18:57Z
119
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T11:13:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
haris-waqar/TrimLesson6
haris-waqar
2024-11-13T11:17:28Z
164
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2024-11-13T10:03:34Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: TrimLesson6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TrimLesson6 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2638 - Accuracy: 0.9020 - F1-score: 0.8990 - Recall-score: 0.9020 - Precision-score: 0.9085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 25 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1-score | Recall-score | Precision-score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:------------:|:---------------:| | 3.0068 | 1.0 | 427 | 3.0157 | 0.2007 | 0.1021 | 0.2007 | 0.1514 | | 2.0798 | 2.0 | 854 | 1.8506 | 0.5478 | 0.4726 | 0.5478 | 0.4815 | | 1.5096 | 3.0 | 1281 | 1.1647 | 0.6963 | 0.6441 | 0.6963 | 0.6785 | | 0.8799 | 4.0 | 1708 | 0.8187 | 0.7240 | 0.6793 | 0.7240 | 0.7035 | | 1.8621 | 5.0 | 2135 | 0.7214 | 0.7409 | 0.7049 | 0.7409 | 0.7304 | | 0.9176 | 6.0 | 2562 | 0.6481 | 0.7564 | 0.7249 | 0.7564 | 0.7461 | | 0.7314 | 7.0 | 2989 | 0.5848 | 0.7695 | 0.7321 | 0.7695 | 0.7379 | | 0.2837 | 8.0 | 3416 | 0.5256 | 0.7858 | 0.7592 | 0.7858 | 0.7798 | | 0.5412 | 9.0 | 3843 | 0.5331 | 0.7852 | 0.7561 | 0.7852 | 0.7785 | | 0.545 | 10.0 | 4270 | 0.5223 | 0.7893 | 0.7590 | 0.7893 | 0.7974 | | 0.6444 | 11.0 | 4697 | 0.4780 | 0.8057 | 0.7896 | 0.8057 | 0.7977 | | 0.6496 | 12.0 | 5124 | 0.4717 | 0.8049 | 0.7771 | 0.8049 | 0.8083 | | 0.1724 | 13.0 | 5551 | 0.4521 | 0.8188 | 0.7994 | 0.8188 | 0.8357 | | 0.4841 | 14.0 | 5978 | 0.4289 | 0.8226 | 0.8109 | 0.8226 | 0.8309 | | 0.3883 | 15.0 | 6405 | 0.4123 | 0.8268 | 0.8075 | 0.8268 | 0.8255 | | 0.6509 | 16.0 | 6832 | 0.3927 | 0.8467 | 0.8400 | 0.8467 | 0.8559 | | 0.6592 | 17.0 | 7259 | 0.3711 | 0.8503 | 0.8415 | 0.8503 | 0.8617 | | 0.2939 | 18.0 | 7686 | 0.3645 | 0.8525 | 0.8368 | 0.8525 | 0.8687 | | 0.0568 | 19.0 | 8113 | 0.3307 | 0.8727 | 0.8675 | 0.8727 | 0.8806 | | 0.2942 | 20.0 | 8540 | 0.3354 | 0.8715 | 0.8668 | 0.8715 | 0.8800 | | 0.4429 | 21.0 | 8967 | 0.3063 | 0.8821 | 0.8775 | 0.8821 | 0.8892 | | 0.1764 | 22.0 | 9394 | 0.2903 | 0.8904 | 0.8849 | 0.8904 | 0.9002 | | 0.0734 | 23.0 | 9821 | 0.2816 | 0.8927 | 0.8873 | 0.8927 | 0.9007 | | 0.5793 | 24.0 | 10248 | 0.2635 | 0.9077 | 0.9062 | 0.9077 | 0.9092 | | 0.2896 | 25.0 | 10675 | 0.2638 | 0.9020 | 0.8990 | 0.9020 | 0.9085 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu118 - Datasets 2.20.0 - Tokenizers 0.20.0
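The card stops at training details; as a hedged inference sketch (the repo id is taken from this record, but the audio file is an assumption), the fine-tuned wav2vec2 classifier can be called through the `audio-classification` pipeline:

```python
# Hypothetical inference sketch; the card shows no usage code, and
# "sample.wav" is an assumption. wav2vec2 expects 16 kHz mono audio.
from transformers import pipeline

clf = pipeline("audio-classification", model="haris-waqar/TrimLesson6")
print(clf("sample.wav", top_k=3))  # top three predicted labels with scores
```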
Zekunli/qwen2.5-7b-alpaca-dsg
Zekunli
2024-11-13T11:15:55Z
37
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T11:07:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TucanoBR/XGBRegressor-text-filter
TucanoBR
2024-11-13T11:13:34Z
0
0
xgboost
[ "xgboost", "text-quality", "portuguese", "pt", "dataset:TucanoBR/GigaVerbo-Text-Filter", "arxiv:2411.07854", "license:apache-2.0", "region:us" ]
null
2024-06-07T15:44:12Z
---
license: apache-2.0
datasets:
- TucanoBR/GigaVerbo-Text-Filter
language:
- pt
metrics:
- mse
library_name: xgboost
tags:
- text-quality
- portuguese
---

# XGBRegressor-text-filter

XGBRegressor-text-filter is a text-quality filter built on top of the [`xgboost`](https://xgboost.readthedocs.io/en/stable/) library. It uses the embeddings generated by [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) as a feature vector. This repository has the [source code](https://github.com/Nkluge-correa/Tucano) used to train this model.

## Usage

Here's an example of how to use the XGBRegressor-text-filter:

```python
from transformers import AutoTokenizer, AutoModel
from xgboost import XGBRegressor
import torch.nn.functional as F
import torch

# Mean-pool the token embeddings, weighting by the attention mask.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
embedding_model = AutoModel.from_pretrained("sentence-transformers/LaBSE")
device = "cuda" if torch.cuda.is_available() else "cpu"
embedding_model.to(device)

bst_r = XGBRegressor(device=device)
bst_r.load_model('/path/to/XGBRegressor-text-classifier.json')

def score_text(text, model):
    encoded_input = tokenizer(text, padding=True, truncation=True, return_tensors='pt').to(device)
    with torch.no_grad():
        model_output = embedding_model(**encoded_input)
    sentence_embedding = mean_pooling(model_output, encoded_input['attention_mask'])
    # Move the embedding back to the CPU before converting to numpy for XGBoost.
    embedding = F.normalize(sentence_embedding, p=2, dim=1).cpu().numpy()
    score = model.predict(embedding)[0]
    return score

score_text("Os tucanos são aves que correspondem à família Ramphastidae, vivem nas florestas tropicais da América Central e América do Sul. A família inclui cinco gêneros e mais de quarenta espécies diferentes. Possuem bicos notavelmente grandes e coloridos, que possuem a função de termorregulação para as muitas espécies que passam muito tempo na copa da floresta exposta ao sol tropical quente.", bst_r)
```

## Cite as 🤗

```latex
@misc{correa2024tucanoadvancingneuraltext,
  title={{Tucano: Advancing Neural Text Generation for Portuguese}},
  author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  year={2024},
  eprint={2411.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.07854},
}
```

## Acknowledgments

We gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.

## License

XGBRegressor-text-filter is licensed under the Apache License, Version 2.0. For more details, see the [LICENSE](LICENSE) file.
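As a usage footnote, the regressor's scores can be turned into a hard corpus filter. The sketch below reuses the `score_text` helper and `bst_r` model from the Usage section; the 0.9 cutoff is an illustrative choice, not the threshold the authors used to filter GigaVerbo.

```python
# Hypothetical corpus filter built on `score_text` from the Usage section.
corpus = [
    "Os tucanos são aves da família Ramphastidae.",
    "12 de março de 2021 | São Paulo",
]

# The 0.9 cutoff is illustrative; pick a threshold by inspecting the score
# distribution on your own data.
high_quality = [text for text in corpus if score_text(text, bst_r) > 0.9]
```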
siddharudhh/llama_3_2_3b_onlycat1
siddharudhh
2024-11-13T11:13:11Z
118
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-3B", "base_model:finetune:unsloth/Llama-3.2-3B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T11:09:15Z
---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** siddharudhh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-3B

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
TucanoBR/BERTimbau-large-text-filter
TucanoBR
2024-11-13T11:12:39Z
117
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "text-quality", "portuguese", "pt", "dataset:TucanoBR/GigaVerbo-Text-Filter", "arxiv:2411.07854", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-10T17:32:14Z
---
license: apache-2.0
datasets:
- TucanoBR/GigaVerbo-Text-Filter
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- text-quality
- portuguese
widget:
- text: "Os tucanos são aves que correspondem à família Ramphastidae, vivem nas florestas tropicais da América Central e América do Sul. A família inclui cinco gêneros e mais de quarenta espécies diferentes. Possuem bicos notavelmente grandes e coloridos, que possuem a função de termorregulação para as muitas espécies que passam muito tempo na copa da floresta exposta ao sol tropical quente."
  example_title: Sample 1
- text: "12 de março de 2021 | São Paulo 8 de agosto de 1999 | Porto Alegre 25 de dezembro de 2022 | Rio de Janeiro 17 de julho de 1985 | Lisboa 4 de outubro de 2010 | Belo Horizonte 23 de setembro de 1978 | Paris 14 de fevereiro de 2003 | Nova Iorque 19 de junho de 1994 | Brasília 5 de novembro de 2009 | Curitiba 30 de abril de 2015 | Buenos Aires"
  example_title: Sample 2
---

# BERTimbau-large-text-filter

BERTimbau-large-text-filter is a [BERT](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model that can be used to score the quality of a given Portuguese text string. This model was trained on the [GigaVerbo-Text-Filter](https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter) dataset.

## Details

- **Size:** 334,398,466 parameters
- **Dataset:** [GigaVerbo-Text-Filter](https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter)
- **Language:** Portuguese
- **Number of Training Epochs:** 3
- **Batch size:** 128
- **Optimizer:** `torch.optim.AdamW`
- **Learning Rate:** 4e-5

This repository has the [source code](https://github.com/Nkluge-correa/Tucano) used to train this model.

## Usage

Here's an example of how to use the BERTimbau-large-text-filter:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("TucanoBR/BERTimbau-large-text-filter")
model = AutoModelForSequenceClassification.from_pretrained("TucanoBR/BERTimbau-large-text-filter")
model.to(device)

classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer, device=device)

result = classifier("Os tucanos são aves que correspondem à família Ramphastidae, vivem nas florestas tropicais da América Central e América do Sul. A família inclui cinco gêneros e mais de quarenta espécies diferentes. Possuem bicos notavelmente grandes e coloridos, que possuem a função de termorregulação para as muitas espécies que passam muito tempo na copa da floresta exposta ao sol tropical quente.")
```

## Cite as 🤗

```latex
@misc{correa2024tucanoadvancingneuraltext,
  title={{Tucano: Advancing Neural Text Generation for Portuguese}},
  author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  year={2024},
  eprint={2411.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.07854},
}
```

## Acknowledgments

We gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.

## License

BERTimbau-large-text-filter is licensed under the Apache License, Version 2.0. For more details, see the [LICENSE](LICENSE) file.
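A note on reading the output: `classifier` returns a list with one dict per input, each holding a `label` and a `score`. The label names below (`LABEL_0`/`LABEL_1`) are the transformers defaults and are an assumption here; check the checkpoint's `config.id2label` to confirm which class denotes high-quality text.

```python
# Inspect the label mapping the checkpoint actually ships with.
print(model.config.id2label)  # e.g. {0: 'LABEL_0', 1: 'LABEL_1'}

# `result` is a list like [{'label': 'LABEL_1', 'score': 0.98}].
prediction = result[0]
is_high_quality = prediction["label"] == "LABEL_1"  # assumption: LABEL_1 = keep
print(prediction["label"], round(prediction["score"], 3), is_high_quality)
```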
TucanoBR/XGBClassifier-text-filter
TucanoBR
2024-11-13T11:12:01Z
0
0
xgboost
[ "xgboost", "text-quality", "portuguese", "pt", "dataset:TucanoBR/GigaVerbo-Text-Filter", "arxiv:2411.07854", "license:apache-2.0", "region:us" ]
null
2024-06-07T14:32:34Z
---
license: apache-2.0
datasets:
- TucanoBR/GigaVerbo-Text-Filter
language:
- pt
metrics:
- accuracy
library_name: xgboost
tags:
- text-quality
- portuguese
---

# XGBClassifier-text-filter

XGBClassifier-text-filter is a text-quality filter built on top of the [`xgboost`](https://xgboost.readthedocs.io/en/stable/) library. It uses the embeddings generated by [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) as a feature vector. This repository has the [source code](https://github.com/Nkluge-correa/Tucano) used to train this model.

## Usage

Here's an example of how to use the XGBClassifier-text-filter:

```python
from transformers import AutoTokenizer, AutoModel
from xgboost import XGBClassifier
import torch.nn.functional as F
import torch

# Mean-pool the token embeddings, weighting by the attention mask.
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
embedding_model = AutoModel.from_pretrained("sentence-transformers/LaBSE")
device = "cuda" if torch.cuda.is_available() else "cpu"
embedding_model.to(device)

bst = XGBClassifier(device=device)
bst.load_model('/path/to/XGBClassifier-text-classifier.json')

def score_text(text, model):
    encoded_input = tokenizer(text, padding=True, truncation=True, return_tensors='pt').to(device)
    with torch.no_grad():
        model_output = embedding_model(**encoded_input)
    sentence_embedding = mean_pooling(model_output, encoded_input['attention_mask'])
    # Move the embedding back to the CPU before converting to numpy for XGBoost.
    embedding = F.normalize(sentence_embedding, p=2, dim=1).cpu().numpy()
    score = model.predict(embedding)[0]
    return score

score_text("Os tucanos são aves que correspondem à família Ramphastidae, vivem nas florestas tropicais da América Central e América do Sul. A família inclui cinco gêneros e mais de quarenta espécies diferentes. Possuem bicos notavelmente grandes e coloridos, que possuem a função de termorregulação para as muitas espécies que passam muito tempo na copa da floresta exposta ao sol tropical quente.", bst)
```

## Cite as 🤗

```latex
@misc{correa2024tucanoadvancingneuraltext,
  title={{Tucano: Advancing Neural Text Generation for Portuguese}},
  author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  year={2024},
  eprint={2411.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.07854},
}
```

## Acknowledgments

We gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.

## License

XGBClassifier-text-filter is licensed under the Apache License, Version 2.0. For more details, see the [LICENSE](LICENSE) file.
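Because the tokenizer and `mean_pooling` already handle padded batches, the helper above generalizes to scoring many texts in one forward pass. A minimal sketch, reusing the objects defined in the Usage section:

```python
# Hypothetical batched variant of `score_text`: embed a list of strings in
# one forward pass and let XGBoost predict all labels at once.
texts = ["Primeiro texto de exemplo.", "Segundo texto de exemplo."]
encoded = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(device)
with torch.no_grad():
    output = embedding_model(**encoded)
embeddings = F.normalize(mean_pooling(output, encoded["attention_mask"]), p=2, dim=1).cpu().numpy()
labels = bst.predict(embeddings)  # one 0/1 prediction per input text
```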
TucanoBR/BERTimbau-base-text-filter
TucanoBR
2024-11-13T11:11:22Z
228
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "text-quality", "portuguese", "pt", "dataset:TucanoBR/GigaVerbo-Text-Filter", "arxiv:2411.07854", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-11T12:00:47Z
---
license: apache-2.0
datasets:
- TucanoBR/GigaVerbo-Text-Filter
language:
- pt
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- text-quality
- portuguese
widget:
- text: "Os tucanos são aves que correspondem à família Ramphastidae, vivem nas florestas tropicais da América Central e América do Sul. A família inclui cinco gêneros e mais de quarenta espécies diferentes. Possuem bicos notavelmente grandes e coloridos, que possuem a função de termorregulação para as muitas espécies que passam muito tempo na copa da floresta exposta ao sol tropical quente."
  example_title: Sample 1
- text: "12 de março de 2021 | São Paulo 8 de agosto de 1999 | Porto Alegre 25 de dezembro de 2022 | Rio de Janeiro 17 de julho de 1985 | Lisboa 4 de outubro de 2010 | Belo Horizonte 23 de setembro de 1978 | Paris 14 de fevereiro de 2003 | Nova Iorque 19 de junho de 1994 | Brasília 5 de novembro de 2009 | Curitiba 30 de abril de 2015 | Buenos Aires"
  example_title: Sample 2
---

# BERTimbau-base-text-filter

BERTimbau-base-text-filter is a [BERT](https://huggingface.co/neuralmind/bert-base-portuguese-cased) model that can be used to score the quality of a given Portuguese text string. This model was trained on the [GigaVerbo-Text-Filter](https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter) dataset.

## Details

- **Size:** 109,038,209 parameters
- **Dataset:** [GigaVerbo-Text-Filter](https://huggingface.co/datasets/TucanoBR/GigaVerbo-Text-Filter)
- **Language:** Portuguese
- **Number of Training Epochs:** 3
- **Batch size:** 128
- **Optimizer:** `torch.optim.AdamW`
- **Learning Rate:** 4e-5

This repository has the [source code](https://github.com/Nkluge-correa/Tucano) used to train this model.

## Usage

Here's an example of how to use the BERTimbau-base-text-filter:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import TextClassificationPipeline
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("TucanoBR/BERTimbau-base-text-filter")
model = AutoModelForSequenceClassification.from_pretrained("TucanoBR/BERTimbau-base-text-filter")
model.to(device)

classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer, device=device)

result = classifier("Os tucanos são aves que correspondem à família Ramphastidae, vivem nas florestas tropicais da América Central e América do Sul. A família inclui cinco gêneros e mais de quarenta espécies diferentes. Possuem bicos notavelmente grandes e coloridos, que possuem a função de termorregulação para as muitas espécies que passam muito tempo na copa da floresta exposta ao sol tropical quente.")
```

## Cite as 🤗

```latex
@misc{correa2024tucanoadvancingneuraltext,
  title={{Tucano: Advancing Neural Text Generation for Portuguese}},
  author={Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza},
  year={2024},
  eprint={2411.07854},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.07854},
}
```

## Acknowledgments

We gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en), along with the support provided by its High Performance Computing & Analytics Lab.

## License

BERTimbau-base-text-filter is licensed under the Apache License, Version 2.0. For more details, see the [LICENSE](LICENSE) file.
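One practical caveat: BERT-based filters truncate inputs at 512 tokens, so very long documents are judged only by their opening. A hedged workaround is to score fixed-size windows and average; the windowing below is an illustrative scheme, not the procedure used by the authors.

```python
# Hypothetical long-document scoring: split into ~510-token windows,
# score each window with the pipeline, and average the scores.
def score_long_text(text, window=510):
    ids = tokenizer(text, truncation=False, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + window] for i in range(0, len(ids), window)] or [ids]  # guard empty input
    scores = [
        classifier(tokenizer.decode(chunk, skip_special_tokens=True))[0]["score"]
        for chunk in chunks
    ]
    return sum(scores) / len(scores)
```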
hugosousa/classifier_smoll_135m
hugosousa
2024-11-13T11:10:42Z
32
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "best_valid_loss", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-30T14:49:35Z
--- library_name: transformers tags: - best_valid_loss --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rohan-G/bnb_4_bit_quantization
Rohan-G
2024-11-13T11:05:38Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-13T11:00:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vedikagoyal150903/custom-llama-code-generator
vedikagoyal150903
2024-11-13T11:00:33Z
119
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T10:56:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xxhe/esci-dpo-mistral-7b-instruct-iter-3
xxhe
2024-11-13T10:57:56Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T10:55:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
falan42/llama_lora_8b_medical_parallax_2_gguf
falan42
2024-11-13T10:46:08Z
54
1
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1", "base_model:quantized:ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-13T10:44:43Z
---
base_model: ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** falan42
- **License:** apache-2.0
- **Finetuned from model:** ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/diffusionfamily_-_diffullama-gguf
RichardErkhov
2024-11-13T10:42:35Z
45
0
null
[ "gguf", "arxiv:2410.17891", "endpoints_compatible", "region:us" ]
null
2024-11-13T06:51:04Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

diffullama - GGUF
- Model creator: https://huggingface.co/diffusionfamily/
- Original model: https://huggingface.co/diffusionfamily/diffullama/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [diffullama.Q2_K.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q2_K.gguf) | Q2_K | 2.36GB |
| [diffullama.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [diffullama.Q3_K.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q3_K.gguf) | Q3_K | 3.07GB |
| [diffullama.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [diffullama.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [diffullama.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [diffullama.Q4_0.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q4_0.gguf) | Q4_0 | 3.56GB |
| [diffullama.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [diffullama.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [diffullama.Q4_K.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q4_K.gguf) | Q4_K | 3.8GB |
| [diffullama.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [diffullama.Q4_1.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q4_1.gguf) | Q4_1 | 3.95GB |
| [diffullama.Q5_0.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q5_0.gguf) | Q5_0 | 4.33GB |
| [diffullama.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [diffullama.Q5_K.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q5_K.gguf) | Q5_K | 4.45GB |
| [diffullama.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [diffullama.Q5_1.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q5_1.gguf) | Q5_1 | 4.72GB |
| [diffullama.Q6_K.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q6_K.gguf) | Q6_K | 5.15GB |
| [diffullama.Q8_0.gguf](https://huggingface.co/RichardErkhov/diffusionfamily_-_diffullama-gguf/blob/main/diffullama.Q8_0.gguf) | Q8_0 | 6.67GB |

Original model description:
---
library_name: transformers
base_model:
- meta-llama/Llama-2-7b-hf
tags:
- llama-factory
- full
- diffusion
model-index:
- name: diffullama
  results: []
license: apache-2.0
datasets:
- bigcode/starcoderdata
- cerebras/SlimPajama-627B
---

<!-- This model card has been generated automatically according to the
information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# diffullama

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).

## Model description

Details and model-loading instructions can be found at [https://github.com/HKUNLP/DiffuLLaMA](https://github.com/HKUNLP/DiffuLLaMA).

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

```
@misc{gong2024scalingdiffusionlanguagemodels,
  title={Scaling Diffusion Language Models via Adaptation from Autoregressive Models},
  author={Shansan Gong and Shivam Agarwal and Yizhe Zhang and Jiacheng Ye and Lin Zheng and Mukai Li and Chenxin An and Peilin Zhao and Wei Bi and Jiawei Han and Hao Peng and Lingpeng Kong},
  year={2024},
  eprint={2410.17891},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.17891},
}
```
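For readers who only need to fetch one of the quantized files, a minimal download-and-load sketch with `huggingface_hub` and `llama-cpp-python` follows. One caveat: DiffuLLaMA is a diffusion language model, so llama.cpp's standard autoregressive sampling is unlikely to reproduce the intended decoding; consult the DiffuLLaMA repository linked above for the proper generation procedure.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quantization from this repo (Q4_K_M chosen as an example).
model_path = hf_hub_download(
    repo_id="RichardErkhov/diffusionfamily_-_diffullama-gguf",
    filename="diffullama.Q4_K_M.gguf",
)

# Loading works like any GGUF checkpoint; meaningful generation, however,
# requires the diffusion decoding described in the DiffuLLaMA repository.
llm = Llama(model_path=model_path, n_ctx=2048)
```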
Beehzod/Uzbek_SpeechT5_TTS_Fine-tuning_faster
Beehzod
2024-11-13T10:35:11Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-11-13T09:39:43Z
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: Uzbek_SpeechT5_TTS_Fine-tuning_faster
  results: []
---

<!-- This model card has been generated automatically according to the
information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Uzbek_SpeechT5_TTS_Fine-tuning_faster

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.47.0.dev0
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
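The card omits an inference snippet; a sketch following the standard SpeechT5 recipe is below. The CMU ARCTIC x-vector is an arbitrary stand-in speaker embedding, not necessarily the voice this checkpoint was fine-tuned on.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "Beehzod/Uzbek_SpeechT5_TTS_Fine-tuning_faster"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector serves as the speaker embedding; this CMU ARCTIC
# entry is an illustrative choice, not the training speaker.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Salom, dunyo!", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```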
amitk23/TKG3
amitk23
2024-11-13T10:28:14Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T10:24:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Rarti/phi3.5_lora_merged
Rarti
2024-11-13T10:28:13Z
7
0
null
[ "safetensors", "phi3", "llama-factory", "custom_code", "license:apache-2.0", "region:us" ]
null
2024-11-13T10:20:32Z
---
license: apache-2.0
tags:
- llama-factory
---
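No usage snippet is provided with this merged checkpoint; a hedged loading sketch is below. `trust_remote_code=True` follows the repo's `custom_code` tag; whether the merged weights expect Phi-3.5's chat template is not documented in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Rarti/phi3.5_lora_merged"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
# device_map="auto" requires the `accelerate` package.
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True, device_map="auto")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```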
cuongdev/tsetmoi-1111
cuongdev
2024-11-13T10:22:26Z
35
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-11-13T10:16:35Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---

### tsetmoi-1111 Dreambooth model trained by cuongdev with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook.

Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
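Since the card ships no inference code, a minimal diffusers sketch is below. The DreamBooth instance token is not stated in the card; the "tsetmoi" token in the prompt is a guess derived from the repository name.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cuongdev/tsetmoi-1111", torch_dtype=torch.float16
).to("cuda")

# "tsetmoi" is assumed to be the DreamBooth instance token (unconfirmed).
image = pipe("a photo of tsetmoi, portrait, high detail").images[0]
image.save("tsetmoi_sample.png")
```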
ksathyan/vicuna-merged-new
ksathyan
2024-11-13T10:19:54Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-11-13T10:15:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ishitangupta/fastedit-model-32
ishitangupta
2024-11-13T10:16:25Z
31
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-11-13T10:09:09Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
VLKVLK/media-file-recognizer-tiny-llama-1.1b-v2
VLKVLK
2024-11-13T10:09:01Z
120
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-13T10:06:50Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mav23/Bielik-11B-v2-GGUF
mav23
2024-11-13T10:08:41Z
67
0
transformers
[ "transformers", "gguf", "pl", "arxiv:2410.18565", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-13T08:48:05Z
---
license: apache-2.0
language:
- pl
library_name: transformers
inference:
  parameters:
    temperature: 0.9
extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
---

<p align="center">
  <img src="https://huggingface.co/speakleash/Bielik-11B-v2/raw/main/speakleash_cyfronet.png">
</p>

# Bielik-11B-v2

Bielik-11B-v2 is a generative text model with 11 billion parameters. It is initialized from Mistral-7B-v0.2 and trained on 400 billion tokens. The model is the result of a unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH. Developed and trained on Polish text corpora, carefully selected and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure, specifically within the PLGrid environment, and more precisely the HPC center ACK Cyfronet AGH. The creation and training of Bielik-11B-v2 were supported by computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers, enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning. As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.

⚠️ This is a base model intended for further fine-tuning across most use cases. If you're looking for a model ready for chatting or following instructions out-of-the-box, please use [Bielik-11B-v.2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct).

🎥 Demo: https://chat.bielik.ai

🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/

<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.

## Model

Bielik-11B-v2 has been trained with [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) using different parallelization techniques. The model training was conducted on the Helios Supercomputer at ACK Cyfronet AGH, utilizing 256 NVIDIA GH200 cards. The training dataset was composed of Polish texts collected and made available through the [SpeakLeash](https://speakleash.org/) project, as well as a subset of CommonCrawl data. We used 200 billion tokens (over 700 GB of plain text) for two epochs of training.

### Model description:

* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Initialized from:** [Mistral-7B-v0.2](https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
* **Model ref:** speakleash:45b6efdb701991181a05968fc53d2a8e

### Quality evaluation

An XGBoost classification model was created to evaluate the quality of texts in native Polish. It is based on 93 features, such as the ratio of out-of-vocabulary words to all words (OOVs), the number of nouns and verbs, average sentence length, and so on. The model outputs the category of a given document (HIGH, MEDIUM, or LOW) along with a probability. This approach allows implementation of a dedicated pipeline for selecting documents; we used entries with a HIGH quality index and a probability exceeding 90%. This filtration and appropriate selection of texts provide a condensed, high-quality database of Polish texts for training purposes.
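The classifier code itself is not published here, but a minimal sketch of such a filtering step, assuming the scikit-learn-style `xgboost` API and using randomly generated stand-ins for the 93 real features, might look like this:

```python
import numpy as np
from xgboost import XGBClassifier

# Stand-ins for the 93 stylometric features per document
# (e.g. OOV ratio, noun/verb counts, average sentence length).
X_train = np.random.rand(1000, 93)
y_train = np.random.randint(0, 3, size=1000)  # 0=LOW, 1=MEDIUM, 2=HIGH

clf = XGBClassifier()  # multi-class objective is inferred from the labels
clf.fit(X_train, y_train)

# Keep only documents predicted HIGH with probability above 90%.
X_docs = np.random.rand(10, 93)
proba = clf.predict_proba(X_docs)
keep = (proba.argmax(axis=1) == 2) & (proba.max(axis=1) > 0.9)
```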
### Quickstart

This model can be easily loaded using the AutoModelForCausalLM functionality.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the base model and its tokenizer from the Hugging Face Hub
model_name = "speakleash/Bielik-11B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

To reduce memory usage, you can load the model in a smaller precision (`bfloat16`).

```python
import torch

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```

And then you can use Hugging Face pipelines to generate text:

```python
import transformers

text = "Najważniejszym celem człowieka na ziemi jest"

# Sampled generation with a 100-token budget
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Generated output:

> Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami.

## Evaluation

Models have been evaluated on two leaderboards: [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) and [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The Open PL LLM Leaderboard uses a 5-shot evaluation and focuses on NLP tasks in Polish, while the Open LLM Leaderboard evaluates models on various English language tasks.

### Open PL LLM Leaderboard

The benchmark evaluates models on NLP tasks such as sentiment analysis, categorization, and text classification, but does not test chatting skills. The Average column is the mean score across all tasks, normalized by baseline scores.

| Model                  | Parameters (B) | Average   |
|------------------------|----------------|-----------|
| Meta-Llama-3-70B       | 70             | 62.07     |
| Qwen1.5-72B            | 72             | 61.11     |
| Meta-Llama-3.1-70B     | 70             | 60.87     |
| Mixtral-8x22B-v0.1     | 141            | 60.75     |
| Qwen1.5-32B            | 32             | 58.71     |
| **Bielik-11B-v2**      | **11**         | **58.14** |
| Qwen2-7B               | 7              | 49.39     |
| SOLAR-10.7B-v1.0       | 10.7           | 47.54     |
| Mistral-Nemo-Base-2407 | 12             | 47.28     |
| internlm2-20b          | 20             | 47.15     |
| Meta-Llama-3.1-8B      | 8              | 43.77     |
| Meta-Llama-3-8B        | 8              | 43.30     |
| Mistral-7B-v0.2        | 7              | 38.81     |
| Bielik-7B-v0.1         | 7              | 34.34     |
| Qra-13b                | 13             | 33.90     |
| Qra-7b                 | 7              | 16.60     |

The results from the Open PL LLM Leaderboard show that Bielik-11B-v2, with 11 billion parameters, achieved an average score of 58.14. This makes it the best performing model among those under 20B parameters, outperforming the second-best model in this category by 8.75 percentage points. This lead not only places it well ahead of its predecessor, Bielik-7B-v0.1 (which scored 34.34), but also shows it outperforming several larger models, highlighting the advancements made in this newer version. Other Polish models listed include Qra-13b and Qra-7b, scoring 33.90 and 16.60 respectively, which Bielik-11B-v2 outperforms by a considerable margin.
Additionally, Bielik-11B-v2 was initialized from the weights of Mistral-7B-v0.2, which itself scored 38.81, further demonstrating the effectiveness of the enhancements incorporated into Bielik-11B-v2.

### Open LLM Leaderboard

The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges.

| Model             | AVG       | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu  | winogrande | gsm8k |
|-------------------|-----------|---------------|-----------|----------------|-------|------------|-------|
| **Bielik-11B-v2** | **65.87** | 60.58         | 79.84     | 46.13          | 63.06 | 77.82      | 67.78 |
| Mistral-7B-v0.2   | 60.37     | 60.84         | 83.08     | 41.76          | 63.62 | 78.22      | 34.72 |
| Bielik-7B-v0.1    | 49.98     | 45.22         | 67.92     | 47.16          | 43.20 | 66.85      | 29.49 |

The results from the Open LLM Leaderboard demonstrate the strong performance of Bielik-11B-v2 across various NLP tasks. With an average score of 65.87, it significantly outperforms its predecessor, Bielik-7B-v0.1, and even surpasses Mistral-7B-v0.2, which served as its initial weight basis.

Key observations:

1. Bielik-11B-v2 shows substantial improvements in most categories compared to Bielik-7B-v0.1, highlighting the effectiveness of the model's enhancements.
2. Its standout result is gsm8k (mathematical problem-solving), where it nearly doubles the score of Mistral-7B-v0.2, while remaining strong on commonsense-reasoning tasks such as hellaswag and winogrande.
3. While Bielik-7B-v0.1 scores slightly higher on truthfulqa_mc2, Bielik-11B-v2 maintains competitive performance in this truth-discernment task.

Although Bielik-11B-v2 was primarily trained on Polish data, it has retained and even improved its ability to understand and operate in English, as evidenced by its strong performance across these English-language benchmarks. This suggests that the model has effectively leveraged cross-lingual transfer learning, maintaining its Polish language expertise while enhancing its English language capabilities.

## Limitations and Biases

Bielik-11B-v2 is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.

Bielik-11B-v2 can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-11B-v2 was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## Citation

Please cite this model using the following format:

```
@misc{Bielik11Bv2b,
    title   = {Bielik-11B-v2 model card},
    author  = {Ociepa, Krzysztof and Flis, Łukasz and Wróbel, Krzysztof and Gwoździej, Adrian and {SpeakLeash Team} and {Cyfronet Team}},
    year    = {2024},
    url     = {https://huggingface.co/speakleash/Bielik-11B-v2},
    note    = {Accessed: 2024-08-28},
    urldate = {2024-08-28}
}

@unpublished{Bielik11Bv2a,
    author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof},
    title  = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation},
    year   = {2024},
}

@misc{ociepa2024bielik7bv01polish,
    title         = {Bielik 7B v0.1: A Polish Language Model -- Development, Insights, and Evaluation},
    author        = {Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas},
    year          = {2024},
    eprint        = {2410.18565},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CL},
    url           = {https://arxiv.org/abs/2410.18565},
}
```

## Responsible for training the model

* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data cleaning and quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks

The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH.

Individuals who contributed to the creation of the model:
[Grzegorz Urbanowicz](https://www.linkedin.com/in/grzegorz-urbanowicz-05823469/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Aleksander Smywiński-Pohl](https://www.linkedin.com/in/apohllo/).

Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/),
[Marek Magryś](https://www.linkedin.com/in/magrys/).

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
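Note that this repository (mav23/Bielik-11B-v2-GGUF) hosts GGUF quantizations of the model, which the original card does not cover. A minimal sketch of running one locally, assuming the `llama-cpp-python` bindings and a downloaded quantization file (the filename below is hypothetical):

```python
from llama_cpp import Llama

# Path to a downloaded GGUF quantization (hypothetical filename).
llm = Llama(model_path="./bielik-11b-v2.Q4_K_M.gguf")

# Base-model completion, matching the card's Polish prompt example.
output = llm("Najważniejszym celem człowieka na ziemi jest", max_tokens=100)
print(output["choices"][0]["text"])
```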
Helios9/NCBI_NER
Helios9
2024-11-13T10:08:08Z
21
1
null
[ "safetensors", "deberta-v2", "NER", "phenotypes", "diseases", "bio", "classification", "token-classification", "en", "dataset:ncbi/pubmed", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:unknown", "region:us" ]
token-classification
2024-11-13T09:51:00Z
---
license: unknown
datasets:
- ncbi/pubmed
language:
- en
metrics:
- f1
base_model:
- microsoft/deberta-v3-base
pipeline_tag: token-classification
tags:
- NER
- phenotypes
- diseases
- bio
- classification
---

**How to Use the Model for Inference:**

You can use the Hugging Face `pipeline` for easy inference:

```python
from transformers import pipeline

# Load the fine-tuned NER model; "simple" aggregation merges word pieces
# back into whole-word entity spans.
model_path = "venkatd/NCBI_NER"
pipe = pipeline(
    task="token-classification",
    model=model_path,
    tokenizer=model_path,
    aggregation_strategy="simple"
)

# Test the pipeline
text = ("A 48-year-old female presented with vaginal bleeding and abnormal Pap smears. "
        "Upon diagnosis of invasive non-keratinizing SCC of the cervix, she underwent a radical "
        "hysterectomy with salpingo-oophorectomy which demonstrated positive spread to the pelvic "
        "lymph nodes and the parametrium.")
result = pipe(text)
print(result)
```

**Output Example:**

The output includes the entity group (Disease), a confidence score, the matched text, and its start/end positions. Here's a sample output format:

```json
[
  {
    "entity_group": "Disease",
    "score": 0.98,
    "word": "SCC of the cervix",
    "start": 121,
    "end": 139
  },
  ...
]
```

**Model Summary and Training Details**

### Model Architecture
- **Base Model**: `microsoft/deberta-v3-base`
- **Task**: Token Classification for Named Entity Recognition (NER) with a focus on disease entities.
- **Number of Labels**: 3 (O, B-Disease, I-Disease)

### Dataset
- **Dataset**: NCBI Disease Corpus
- **Description**: The NCBI Disease corpus is a specialized medical dataset that includes 793 PubMed abstracts. It is structured to help identify disease mentions in scientific literature, and each mention is annotated with disease concepts from the MeSH (Medical Subject Headings) or OMIM (Online Mendelian Inheritance in Man) databases.
- **Split**:
  - Training Set: 593 abstracts
  - Development (Validation) Set: 100 abstracts
  - Test Set: 100 abstracts

### Training Details
- **Training Objective**: The model was trained with a cross-entropy loss for token classification. Gradient accumulation was used to stabilize the loss and improve resource efficiency.
- **Gradient Accumulation**: 2 steps
- **Batch Size**: 8
- **Device**: Trained on a GPU if available, using mixed-precision training for better performance.

### Optimizer and Learning Rate Scheduler
- **Optimizer**: AdamW
  - **Learning Rate**: 1e-5
  - **Betas**: (0.9, 0.999)
  - **Epsilon**: 1e-8
- **Learning Rate Scheduler**: Cosine Scheduler with Warmup
  - **Warmup Steps**: 10% of total training steps
  - **Total Training Steps**: Calculated as `len(train_loader) * num_epochs`

### Epochs and Validation
- **Epochs**: 5
- **Training and Validation Loss**: The loss was stable over the 5 epochs; the checkpoint with the best validation loss was saved for evaluation.

### Evaluation and Performance
- **Test Dataset F1 Score**: 0.9772
- **Evaluation Metric**: F1 score, which balances precision and recall, was used as the primary metric to assess the model's performance.
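The training loop itself is not shown in the card; the following is a minimal sketch of the optimizer/scheduler configuration described above, assuming `model` and `train_loader` are defined elsewhere:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

num_epochs = 5
accumulation_steps = 2  # gradient accumulation over 2 steps

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8
)
# Total steps follow the card's formula: len(train_loader) * num_epochs.
total_steps = len(train_loader) * num_epochs
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # 10% warmup
    num_training_steps=total_steps,
)

for epoch in range(num_epochs):
    for step, batch in enumerate(train_loader):
        # Scale the loss so accumulated gradients match a larger batch.
        loss = model(**batch).loss / accumulation_steps
        loss.backward()
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
```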
Rohan-G/partial_quantization_from_scratch
Rohan-G
2024-11-13T10:07:46Z
153
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2024-11-13T09:58:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NiloofarMomeni/distilhubert-finetuned-VD
NiloofarMomeni
2024-11-13T10:04:54Z
162
0
transformers
[ "transformers", "safetensors", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:ntu-spml/distilhubert", "base_model:finetune:ntu-spml/distilhubert", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-06-03T14:32:16Z
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-VD
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: GTZAN
      type: marsyas/gtzan
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8933256172839507
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilhubert-finetuned-VD

This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset. It achieves the following results on the evaluation set:
- Loss: 0.7226
- Accuracy: 0.8933

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 25
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3302        | 1.0   | 195  | 0.3716          | 0.8800   |
| 0.6059        | 2.0   | 390  | 0.5195          | 0.8090   |
| 0.4938        | 3.0   | 585  | 1.0102          | 0.6260   |
| 0.836         | 4.0   | 780  | 1.1662          | 0.6742   |
| 0.2234        | 5.0   | 975  | 0.6792          | 0.8389   |
| 0.1444        | 6.0   | 1170 | 0.9137          | 0.8239   |
| 0.2986        | 7.0   | 1365 | 0.7987          | 0.8623   |
| 0.0004        | 8.0   | 1560 | 1.5075          | 0.7687   |
| 0.0005        | 9.0   | 1755 | 0.7226          | 0.8933   |
| 0.0002        | 10.0  | 1950 | 0.8246          | 0.8829   |
| 0.0002        | 11.0  | 2145 | 1.4227          | 0.8129   |
| 0.0001        | 12.0  | 2340 | 1.0478          | 0.8665   |
| 0.0001        | 13.0  | 2535 | 1.3328          | 0.8322   |
| 0.0001        | 14.0  | 2730 | 1.3480          | 0.8347   |
| 0.0001        | 15.0  | 2925 | 1.3559          | 0.8370   |
| 0.0           | 16.0  | 3120 | 1.3589          | 0.8407   |
| 0.0           | 17.0  | 3315 | 1.3706          | 0.8410   |
| 0.0           | 18.0  | 3510 | 1.3831          | 0.8410   |
| 0.0           | 19.0  | 3705 | 1.3954          | 0.8410   |
| 0.0           | 20.0  | 3900 | 1.4027          | 0.8412   |
| 0.0           | 21.0  | 4095 | 1.4132          | 0.8409   |
| 0.0           | 22.0  | 4290 | 1.4218          | 0.8407   |
| 0.0           | 23.0  | 4485 | 1.4272          | 0.8407   |
| 0.0           | 24.0  | 4680 | 1.4321          | 0.8399   |
| 0.0           | 25.0  | 4875 | 1.4337          | 0.8399   |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
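The card gives no usage example; a minimal inference sketch for this checkpoint, with a placeholder path to a local audio file:

```python
from transformers import pipeline

# Load the fine-tuned audio classifier from this repository.
classifier = pipeline(
    "audio-classification",
    model="NiloofarMomeni/distilhubert-finetuned-VD",
)

# 'sample.wav' is a placeholder path to a local audio file.
predictions = classifier("sample.wav")
print(predictions)  # [{'label': ..., 'score': ...}, ...]
```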
Shah1st/mountain-ner-model
Shah1st
2024-11-13T10:03:46Z
106
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-10-16T22:14:27Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This project fine-tunes a BERT-based model (dslim/bert-large-NER) to perform Named Entity Recognition (NER) on mountain names in text. The model has been trained to identify mentions of mountain names and differentiate them from other geographic entities or non-entities.

Features:
- Fine-tuned on a custom dataset that includes sentences both with and without mountain names.
- Uses focal loss to handle class imbalance, which helps the model focus on correctly classifying rare mountain names (see the sketch at the end of this card).
- Token-level classification with the labels B-MOUNTAIN, I-MOUNTAIN, and O (non-entity).
- Balances training between sentences with mountains (80%) and without mountains (20%).

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub.

- **Developed by:** Oleksandr Kharytonov
- **Model type:** BERT
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** https://huggingface.co/dslim/bert-large-NER

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/Shah1st/mountain-ner

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# './saved_model' is the locally saved fine-tuned checkpoint
# (produced by the training script in the GitHub repository).
tokenizer = AutoTokenizer.from_pretrained('./saved_model')
model = AutoModelForTokenClassification.from_pretrained('./saved_model')
```

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## How to Get Started with the Model

Use the GitHub repository below to get started with the model.

https://github.com/Shah1st/mountain-ner

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The "DFKI-SLT/few-nerd" dataset ("supervised" configuration), filtered for sentences with 'fine_ner_tags' == 24 (mountains).

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

- 'eval_loss': 0.009154710918664932
- 'eval_macro_f1': 0.8952192988290304
- 'eval_accuracy': 0.9746226793108054

### Testing Data, Factors & Metrics

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

- Macro F1: 0.895
- Accuracy: 0.974

#### Summary

A dslim/bert-large-NER model fine-tuned to recognize mountain names in text and distinguish them from other geographic entities, reaching a macro F1 of 0.895 and an accuracy of 0.974 on the evaluation split.
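The card mentions focal loss but does not show it; below is a minimal sketch for token classification, with an assumed `gamma` value (the actual hyperparameters live in the GitHub repository):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, labels, gamma=2.0, ignore_index=-100):
    """Focal loss for token classification: down-weights easy tokens so
    training focuses on rare entity labels such as B/I-MOUNTAIN."""
    # logits: (batch, seq_len, num_labels); labels: (batch, seq_len)
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
        ignore_index=ignore_index,
    )
    pt = torch.exp(-ce)              # probability assigned to the true class
    loss = (1 - pt) ** gamma * ce    # focal modulation term
    mask = labels.view(-1) != ignore_index
    return loss[mask].mean()
```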