Dataset columns (reconstructed from the viewer header):

| Column | Type | Range / values |
|:--|:--|:--|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-26 06:27:38 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 496 classes |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-26 06:27:10 |
| card | string | lengths 11 – 1.01M |
appvoid/merging-x
appvoid
2024-05-06T04:38:14Z
144
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:finetune:appvoid/palmer-003", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T04:37:19Z
---
base_model:
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: appvoid/palmer-003
    layer_range: [0, 5]
- sources:
  - model: appvoid/palmer-003
    layer_range: [3, 10]
- sources:
  - model: appvoid/palmer-003
    layer_range: [6, 15]
- sources:
  - model: appvoid/palmer-003
    layer_range: [9, 20]
- sources:
  - model: appvoid/palmer-003
    layer_range: [12, 22]
merge_method: passthrough
dtype: float16
```
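The passthrough method stacks the selected layer ranges verbatim, overlaps included, so the merged model's depth is just the sum of the slice lengths. A minimal sanity-check sketch using the ranges from the config above (assuming mergekit's end-exclusive `layer_range` convention; this helper is illustrative, not part of mergekit):

```python
# Slice ranges from the passthrough config above (assumed end-exclusive).
slices = [(0, 5), (3, 10), (6, 15), (9, 20), (12, 22)]

# Passthrough stacks the selected ranges, duplicated layers and all,
# so merged depth = sum of individual slice lengths: 5+7+9+11+10.
merged_layers = sum(end - start for start, end in slices)
print(merged_layers)  # 42
```

Under that convention, the 22-layer base model becomes a 42-layer merged model.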
mradermacher/llama-2-7b-cars-v3-GGUF
mradermacher
2024-05-06T04:38:11Z
4
0
transformers
[ "transformers", "gguf", "en", "base_model:Vignav/llama-2-7b-cars-v3", "base_model:quantized:Vignav/llama-2-7b-cars-v3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-20T06:25:35Z
---
base_model: Vignav/llama-2-7b-cars-v3
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Vignav/llama-2-7b-cars-v3

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-7b-cars-v3-GGUF/resolve/main/llama-2-7b-cars-v3.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
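The Size/GB column makes it straightforward to pick the largest quant that fits a given memory budget. A small illustrative helper (not part of these repos; sizes copied from the table above):

```python
# (type, size in GB) pairs taken from the quant table above.
quants = [
    ("Q2_K", 2.6), ("IQ3_XS", 2.9), ("IQ3_S", 3.0), ("Q3_K_S", 3.0),
    ("IQ3_M", 3.2), ("Q3_K_M", 3.4), ("Q3_K_L", 3.7), ("IQ4_XS", 3.7),
    ("Q4_K_S", 4.0), ("Q4_K_M", 4.2), ("Q5_K_S", 4.8), ("Q5_K_M", 4.9),
    ("Q6_K", 5.6), ("Q8_0", 7.3),
]

def largest_fitting(quants, budget_gb):
    """Return the largest (type, size) pair within budget_gb, or None."""
    fitting = [q for q in quants if q[1] <= budget_gb]
    return max(fitting, key=lambda q: q[1], default=None)

print(largest_fitting(quants, 5.0))  # ('Q5_K_M', 4.9)
```

Note this only filters by size; per the README, IQ-quants are often preferable over similar-sized non-IQ quants, so ties still deserve a manual look.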
mradermacher/deepseek-llm-67b-base-GGUF
mradermacher
2024-05-06T04:38:05Z
131
0
transformers
[ "transformers", "gguf", "en", "base_model:deepseek-ai/deepseek-llm-67b-base", "base_model:quantized:deepseek-ai/deepseek-llm-67b-base", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-20T10:55:03Z
--- base_model: deepseek-ai/deepseek-llm-67b-base language: - en library_name: transformers license: other license_link: LICENSE license_name: deepseek quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/deepseek-ai/deepseek-llm-67b-base <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/deepseek-llm-67b-base-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q2_K.gguf) | Q2_K | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.IQ3_XS.gguf) | IQ3_XS | 28.0 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q3_K_S.gguf) | Q3_K_S | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.IQ3_S.gguf) | IQ3_S | 29.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.IQ3_M.gguf) | IQ3_M | 30.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q3_K_M.gguf) | Q3_K_M | 32.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q3_K_L.gguf) | Q3_K_L | 35.7 | | | 
[GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.IQ4_XS.gguf) | IQ4_XS | 36.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q4_K_S.gguf) | Q4_K_S | 38.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q4_K_M.gguf) | Q4_K_M | 40.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q5_K_S.gguf) | Q5_K_S | 46.6 | | | [GGUF](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q5_K_M.gguf) | Q5_K_M | 47.8 | | | [PART 1](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q6_K.gguf.part2of2) | Q6_K | 55.4 | very good quality | | [PART 1](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/deepseek-llm-67b-base-GGUF/resolve/main/deepseek-llm-67b-base.Q8_0.gguf.part2of2) | Q8_0 | 71.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
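The Q6_K and Q8_0 quants above are split into `.partXofY` files. As the Usage section notes (via TheBloke's READMEs), these parts are plain byte-level splits, so rejoining them is straight binary concatenation in part order. A minimal sketch (the function name and example filenames are illustrative):

```python
import shutil

def join_parts(parts, dest):
    """Concatenate byte-split ".partXofY" files, in order, into dest."""
    with open(dest, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                # Parts are raw byte splits, so appending them in order
                # reconstructs the original GGUF file exactly.
                shutil.copyfileobj(f, out)

# Example (hypothetical local filenames, downloaded from the table above):
# join_parts(["deepseek-llm-67b-base.Q6_K.gguf.part1of2",
#             "deepseek-llm-67b-base.Q6_K.gguf.part2of2"],
#            "deepseek-llm-67b-base.Q6_K.gguf")
```

On Unix shells the equivalent is a simple `cat part1 part2 > whole.gguf`.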
mradermacher/Prokaryote-8x7B-bf16-GGUF
mradermacher
2024-05-06T04:38:03Z
4
0
transformers
[ "transformers", "gguf", "merge", "moe", "en", "base_model:Kquant03/Prokaryote-8x7B-bf16", "base_model:quantized:Kquant03/Prokaryote-8x7B-bf16", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-20T12:02:00Z
--- base_model: Kquant03/Prokaryote-8x7B-bf16 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - moe --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Kquant03/Prokaryote-8x7B-bf16 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | 
[GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Prokaryote-8x7B-bf16-GGUF/resolve/main/Prokaryote-8x7B-bf16.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Mira-70B-v0.8-GGUF
mradermacher
2024-05-06T04:37:54Z
13
0
transformers
[ "transformers", "gguf", "llama", "ru", "base_model:gotzmann/Mira-70B-v0.8", "base_model:quantized:gotzmann/Mira-70B-v0.8", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T14:23:08Z
--- base_model: gotzmann/Mira-70B-v0.8 language: - ru library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - llama --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/gotzmann/Mira-70B-v0.8 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q2_K.gguf) | Q2_K | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.IQ3_XS.gguf) | IQ3_XS | 28.4 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q3_K_S.gguf) | Q3_K_S | 30.0 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.IQ3_M.gguf) | IQ3_M | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q3_K_L.gguf) | Q3_K_L | 36.2 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.IQ4_XS.gguf) | 
IQ4_XS | 37.3 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q5_K_S.gguf) | Q5_K_S | 47.6 | | | [GGUF](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q5_K_M.gguf) | Q5_K_M | 48.9 | | | [PART 1](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Mira-70B-v0.8-GGUF/resolve/main/Mira-70B-v0.8.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/WestStarling-7B-slerp-GGUF
mradermacher
2024-05-06T04:37:46Z
94
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "senseable/WestLake-7B-v2", "Nexusflow/Starling-LM-7B-beta", "en", "base_model:Sanaullah06/WestStarling-7B-slerp", "base_model:quantized:Sanaullah06/WestStarling-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T18:16:12Z
--- base_model: Sanaullah06/WestStarling-7B-slerp language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - senseable/WestLake-7B-v2 - Nexusflow/Starling-LM-7B-beta --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Sanaullah06/WestStarling-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WestStarling-7B-slerp-GGUF/resolve/main/WestStarling-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Tia-70B-RP-GGUF
mradermacher
2024-05-06T04:37:33Z
1
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:Dogge/Tia-70B-RP", "base_model:quantized:Dogge/Tia-70B-RP", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-20T20:17:19Z
--- base_model: Dogge/Tia-70B-RP language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Dogge/Tia-70B-RP <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Tia-70B-RP-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q4_K_S.gguf) | Q4_K_S | 
40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q5_K_M.gguf) | Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Tia-70B-RP-GGUF/resolve/main/Tia-70B-RP.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF
mradermacher
2024-05-06T04:37:17Z
3
0
transformers
[ "transformers", "gguf", "en", "base_model:WesPro/Llama3-RPLoRa-SmaugOrpo", "base_model:quantized:WesPro/Llama3-RPLoRa-SmaugOrpo", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-21T03:30:59Z
--- base_model: WesPro/Llama3-RPLoRa-SmaugOrpo language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/WesPro/Llama3-RPLoRa-SmaugOrpo <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q3_K_L.gguf) | Q3_K_L | 
4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-RPLoRa-SmaugOrpo-GGUF/resolve/main/Llama3-RPLoRa-SmaugOrpo.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Aura_Uncensored_l3_8B-GGUF
mradermacher
2024-05-06T04:37:09Z
149
2
transformers
[ "transformers", "gguf", "en", "base_model:ResplendentAI/Aura_Uncensored_l3_8B", "base_model:quantized:ResplendentAI/Aura_Uncensored_l3_8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-21T03:34:46Z
--- base_model: ResplendentAI/Aura_Uncensored_l3_8B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/ResplendentAI/Aura_Uncensored_l3_8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q3_K_L.gguf) | Q3_K_L 
| 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Aura_Uncensored_l3_8B-GGUF/resolve/main/Aura_Uncensored_l3_8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Apollo-7B-GGUF
mradermacher
2024-05-06T04:36:47Z
42
2
transformers
[ "transformers", "gguf", "en", "base_model:FreedomIntelligence/Apollo-7B", "base_model:quantized:FreedomIntelligence/Apollo-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T12:10:58Z
--- base_model: FreedomIntelligence/Apollo-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/FreedomIntelligence/Apollo-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q2_K.gguf) | Q2_K | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ3_XS.gguf) | IQ3_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ3_S.gguf) | IQ3_S | 4.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ3_M.gguf) | IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q3_K_M.gguf) | Q3_K_M | 4.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q3_K_L.gguf) | Q3_K_L | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.IQ4_XS.gguf) | IQ4_XS | 4.9 | | | 
[GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q4_K_S.gguf) | Q4_K_S | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q4_K_M.gguf) | Q4_K_M | 5.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q5_K_S.gguf) | Q5_K_S | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q5_K_M.gguf) | Q5_K_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q6_K.gguf) | Q6_K | 7.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Apollo-7B-GGUF/resolve/main/Apollo-7B.Q8_0.gguf) | Q8_0 | 9.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Hyper-L3-GGUF
mradermacher
2024-05-06T04:36:42Z
20
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:anhnv125/Hyper-L3", "base_model:quantized:anhnv125/Hyper-L3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-21T16:23:28Z
--- base_model: anhnv125/Hyper-L3 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/anhnv125/Hyper-L3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | 
[GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hyper-L3-GGUF/resolve/main/Hyper-L3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF
mradermacher
2024-05-06T04:36:02Z
37
0
transformers
[ "transformers", "gguf", "base_model:mlinmg/SG-Raccoon-Yi-55B-200k", "base_model:quantized:mlinmg/SG-Raccoon-Yi-55B-200k", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-22T11:31:59Z
--- base_model: mlinmg/SG-Raccoon-Yi-55B-200k language: - en library_name: transformers license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/mlinmg/SG-Raccoon-Yi-55B-200k <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ1_S.gguf) | i1-IQ1_S | 12.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ1_M.gguf) | i1-IQ1_M | 13.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ2_XS.gguf) | i1-IQ2_XS | 16.6 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ2_S.gguf) | i1-IQ2_S | 17.6 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ2_M.gguf) | i1-IQ2_M | 19.0 | | | 
[GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q2_K.gguf) | i1-Q2_K | 20.7 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 21.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ3_XS.gguf) | i1-IQ3_XS | 23.0 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q3_K_S.gguf) | i1-Q3_K_S | 24.2 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ3_S.gguf) | i1-IQ3_S | 24.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ3_M.gguf) | i1-IQ3_M | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q3_K_M.gguf) | i1-Q3_K_M | 27.0 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q3_K_L.gguf) | i1-Q3_K_L | 29.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-IQ4_XS.gguf) | i1-IQ4_XS | 29.9 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q4_0.gguf) | i1-Q4_0 | 31.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q4_K_S.gguf) | i1-Q4_K_S | 31.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q4_K_M.gguf) | i1-Q4_K_M | 33.4 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q5_K_S.gguf) | i1-Q5_K_S | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q5_K_M.gguf) | i1-Q5_K_M | 39.4 | | | [GGUF](https://huggingface.co/mradermacher/SG-Raccoon-Yi-55B-200k-i1-GGUF/resolve/main/SG-Raccoon-Yi-55B-200k.i1-Q6_K.gguf) | i1-Q6_K | 45.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Unichat-llama3-Chinese-8B-GGUF
mradermacher
2024-05-06T04:35:44Z
38
3
transformers
[ "transformers", "gguf", "en", "zh", "base_model:UnicomLLM/Unichat-llama3-Chinese-8B", "base_model:quantized:UnicomLLM/Unichat-llama3-Chinese-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-23T05:44:50Z
--- base_model: UnicomLLM/Unichat-llama3-Chinese-8B language: - en - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/UnicomLLM/Unichat-llama3-Chinese-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Unichat-llama3-Chinese-8B-GGUF/resolve/main/Unichat-llama3-Chinese-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/blossom-v3_1-yi-34b-GGUF
mradermacher
2024-05-06T04:35:03Z
0
0
transformers
[ "transformers", "gguf", "zh", "en", "dataset:Azure99/blossom-chat-v1", "dataset:Azure99/blossom-math-v2", "dataset:Azure99/blossom-wizard-v1", "dataset:Azure99/blossom-orca-v1", "base_model:Azure99/blossom-v3_1-yi-34b", "base_model:quantized:Azure99/blossom-v3_1-yi-34b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T04:16:56Z
--- base_model: Azure99/blossom-v3_1-yi-34b datasets: - Azure99/blossom-chat-v1 - Azure99/blossom-math-v2 - Azure99/blossom-wizard-v1 - Azure99/blossom-orca-v1 language: - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Azure99/blossom-v3_1-yi-34b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q3_K_L.gguf) | Q3_K_L | 18.2 | | | 
[GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF/resolve/main/blossom-v3_1-yi-34b.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
HeshamHaroon/Arabic_mistral_7b
HeshamHaroon
2024-05-06T04:35:00Z
0
2
transformers
[ "transformers", "safetensors", "unsloth", "text-generation", "ar", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T03:56:49Z
--- library_name: transformers tags: - unsloth language: - ar pipeline_tag: text-generation ---
mradermacher/blossom-v3_1-yi-34b-i1-GGUF
mradermacher
2024-05-06T04:34:33Z
187
0
transformers
[ "transformers", "gguf", "zh", "en", "dataset:Azure99/blossom-chat-v1", "dataset:Azure99/blossom-math-v2", "dataset:Azure99/blossom-wizard-v1", "dataset:Azure99/blossom-orca-v1", "base_model:Azure99/blossom-v3_1-yi-34b", "base_model:quantized:Azure99/blossom-v3_1-yi-34b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T13:09:06Z
--- base_model: Azure99/blossom-v3_1-yi-34b datasets: - Azure99/blossom-chat-v1 - Azure99/blossom-math-v2 - Azure99/blossom-wizard-v1 - Azure99/blossom-orca-v1 language: - zh - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Azure99/blossom-v3_1-yi-34b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ1_S.gguf) | i1-IQ1_S | 7.6 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ1_M.gguf) | i1-IQ1_M | 8.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_S.gguf) | i1-IQ2_S | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ2_M.gguf) | i1-IQ2_M | 11.9 | | | 
[GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.1 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_S.gguf) | i1-IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 18.6 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q4_0.gguf) | i1-Q4_0 | 19.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/blossom-v3_1-yi-34b-i1-GGUF/resolve/main/blossom-v3_1-yi-34b.i1-Q6_K.gguf) | i1-Q6_K | 28.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
nuebaek/komt_mistral_smilestyle_v2
nuebaek
2024-05-06T04:34:21Z
76
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-06T04:31:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/llama-3-8b-English-to-Hinglish-GGUF
mradermacher
2024-05-06T04:33:34Z
29
0
transformers
[ "transformers", "gguf", "unsloth", "trl", "sft", "en", "base_model:4-alokk/llama-3-8b-English-to-Hinglish", "base_model:quantized:4-alokk/llama-3-8b-English-to-Hinglish", "endpoints_compatible", "region:us" ]
null
2024-04-25T12:57:41Z
--- base_model: 4-alokk/llama-3-8b-English-to-Hinglish language: - en library_name: transformers quantized_by: mradermacher tags: - unsloth - trl - sft --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/4-alokk/llama-3-8b-English-to-Hinglish <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-English-to-Hinglish-GGUF/resolve/main/llama-3-8b-English-to-Hinglish.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
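The Size/GB column in the table above maps roughly onto bits per weight. A back-of-the-envelope check (this is my own approximation, assuming a ~8.03B-parameter Llama-3 base and ignoring GGUF metadata overhead and the GB/GiB distinction):

```python
def bits_per_weight(size_gb: float, n_params_billion: float) -> float:
    """Rough bits-per-weight of a quant file: file size in gigabits over parameter count."""
    return size_gb * 8 / n_params_billion

# Q4_K_M of an ~8B model at 5.0 GB works out to about 5 bits per weight,
# consistent with the "4-bit plus overhead" nature of K-quants:
print(round(bits_per_weight(5.0, 8.03), 2))  # ≈ 4.98
```

The same arithmetic explains why the f16 file lands near 16.2 GB: 16 bits per weight times ~8B weights.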
mradermacher/Myrrh_solar_10.7b_3.0-GGUF
mradermacher
2024-05-06T04:33:00Z
8
0
transformers
[ "transformers", "gguf", "ko", "base_model:MoaData/Myrrh_solar_10.7b_3.0", "base_model:quantized:MoaData/Myrrh_solar_10.7b_3.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T09:40:57Z
--- base_model: MoaData/Myrrh_solar_10.7b_3.0 language: - ko library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MoaData/Myrrh_solar_10.7b_3.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | 
[GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Flirty-Mistral-GGUF
mradermacher
2024-05-06T04:32:57Z
125
0
transformers
[ "transformers", "gguf", "LLMs", "NLP", "Vietnamese", "vi", "dataset:Tamnemtf/Flirty", "base_model:Tamnemtf/Flirty-Mistral", "base_model:quantized:Tamnemtf/Flirty-Mistral", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-26T10:03:58Z
--- base_model: Tamnemtf/Flirty-Mistral datasets: - Tamnemtf/Flirty language: - vi library_name: transformers license: mit quantized_by: mradermacher tags: - LLMs - NLP - Vietnamese --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Tamnemtf/Flirty-Mistral <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Flirty-Mistral-GGUF/resolve/main/Flirty-Mistral.f16.gguf) | f16 | 14.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Hebrew-Mistral-7B-GGUF
mradermacher
2024-05-06T04:32:46Z
307
0
transformers
[ "transformers", "gguf", "en", "he", "base_model:yam-peleg/Hebrew-Mistral-7B", "base_model:quantized:yam-peleg/Hebrew-Mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T11:47:38Z
--- base_model: yam-peleg/Hebrew-Mistral-7B language: - en - he library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/yam-peleg/Hebrew-Mistral-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | 
[GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q4_K_M.gguf) | Q4_K_M | 4.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q6_K.gguf) | Q6_K | 6.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.Q8_0.gguf) | Q8_0 | 8.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hebrew-Mistral-7B-GGUF/resolve/main/Hebrew-Mistral-7B.f16.gguf) | f16 | 15.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Qwen1.5-7B-MeChat-GGUF
mradermacher
2024-05-06T04:32:43Z
11
1
transformers
[ "transformers", "gguf", "medical", "zh", "base_model:jun10k/Qwen1.5-7B-MeChat", "base_model:quantized:jun10k/Qwen1.5-7B-MeChat", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-26T12:00:27Z
--- base_model: jun10k/Qwen1.5-7B-MeChat language: - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - medical --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jun10k/Qwen1.5-7B-MeChat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q2_K.gguf) | Q2_K | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_XS.gguf) | IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_S.gguf) | IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_S.gguf) | Q3_K_S | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ3_M.gguf) | IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_M.gguf) | Q3_K_M | 4.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q3_K_L.gguf) | Q3_K_L | 4.3 | | | 
[GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q4_K_M.gguf) | Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q5_K_S.gguf) | Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q5_K_M.gguf) | Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.Q8_0.gguf) | Q8_0 | 8.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-7B-MeChat-GGUF/resolve/main/Qwen1.5-7B-MeChat.f16.gguf) | f16 | 15.5 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF
mradermacher
2024-05-06T04:32:03Z
79
1
transformers
[ "transformers", "gguf", "llama", "latest", "en", "zh", "dataset:teknium/OpenHermes-2.5", "base_model:yaojialzc/Gigi-Llama3-8B-Chinese-zh", "base_model:quantized:yaojialzc/Gigi-Llama3-8B-Chinese-zh", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-04-27T19:08:40Z
--- base_model: yaojialzc/Gigi-Llama3-8B-Chinese-zh datasets: - teknium/OpenHermes-2.5 language: - en - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - llama - latest --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/yaojialzc/Gigi-Llama3-8B-Chinese-zh <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Gigi-Llama3-8B-Chinese-zh-GGUF/resolve/main/Gigi-Llama3-8B-Chinese-zh.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
appvoid/merging-9
appvoid
2024-05-06T04:31:31Z
138
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:finetune:appvoid/palmer-003", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T04:30:53Z
--- base_model: - appvoid/palmer-003 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: appvoid/palmer-003 layer_range: [0, 10] - sources: - model: appvoid/palmer-003 layer_range: [9, 15] - sources: - model: appvoid/palmer-003 layer_range: [14, 20] - sources: - model: appvoid/palmer-003 layer_range: [19, 21] merge_method: passthrough dtype: float16 ```
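The overlapping `layer_range` windows in the passthrough config above determine the depth of the merged model. A quick sketch of the arithmetic (my own reading, assuming mergekit's ranges are end-exclusive as its full-model examples suggest):

```python
# Each passthrough slice copies layers [start, end) of appvoid/palmer-003,
# in order; overlapping windows mean some layers are duplicated.
slices = [(0, 10), (9, 15), (14, 20), (19, 21)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 10 + 6 + 6 + 2 = 24 layers in the merged model
```

So the merge stretches the base model's stack to 24 layers, with layers 9, 14, and 19 each appearing twice.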
mradermacher/Megac4ai-command-r-plus-i1-GGUF
mradermacher
2024-05-06T04:30:26Z
4
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "base_model:nitky/Megac4ai-command-r-plus", "base_model:quantized:nitky/Megac4ai-command-r-plus", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-05-01T22:14:34Z
--- base_model: nitky/Megac4ai-command-r-plus language: - en - fr - de - es - it - pt - ja - ko - zh - ar library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/nitky/Megac4ai-command-r-plus <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Megac4ai-command-r-plus-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ1_S.gguf) | i1-IQ1_S | 35.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ1_M.gguf) | i1-IQ1_M | 38.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 43.6 | | | [GGUF](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ2_XS.gguf) | i1-IQ2_XS | 48.3 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ2_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ2_S.gguf.part2of2) | i1-IQ2_S | 50.9 | | | [PART 
1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ2_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ2_M.gguf.part2of2) | i1-IQ2_M | 55.2 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q2_K.gguf.part2of2) | i1-Q2_K | 60.4 | IQ3_XXS probably better | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_XXS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_XXS.gguf.part2of2) | i1-IQ3_XXS | 62.4 | lower quality | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 66.8 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 70.3 | IQ3_XS probably better | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 70.5 | beats Q3_K* | | [PART 
1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 73.1 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 78.3 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 85.2 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 86.5 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 91.5 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 91.8 | optimal size/speed/quality | | [PART 
1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 96.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q5_K_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q5_K_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q5_K_S.gguf.part3of3) | i1-Q5_K_S | 110.8 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q5_K_M.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q5_K_M.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q5_K_M.gguf.part3of3) | i1-Q5_K_M | 113.7 | | | [PART 1](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Megac4ai-command-r-plus-i1-GGUF/resolve/main/Megac4ai-command-r-plus.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 131.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
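The card above links many quants as PART 1 / PART 2 files; as its usage note says, such multi-part files must be concatenated in order before loading. A minimal sketch of that step (filenames and byte contents here are stand-ins for the real multi-GB downloads, not actual repo files):

```python
# Hypothetical demo of reassembling a split GGUF: the parts are plain byte
# slices of one file, so concatenating them in order restores the original.
from pathlib import Path

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
for name, data in zip(parts, [b"AAAA", b"BBBB"]):  # tiny stand-in contents
    Path(name).write_bytes(data)

# Concatenate the parts in order into a single GGUF file.
with open("model.gguf", "wb") as out:
    for name in parts:
        out.write(Path(name).read_bytes())

print(Path("model.gguf").stat().st_size)  # 8: the parts joined byte-for-byte
```

On a Unix shell the equivalent is simply `cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf`.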
mradermacher/Alphallama3-8B-GGUF
mradermacher
2024-05-06T04:30:19Z
3
1
transformers
[ "transformers", "gguf", "ko", "dataset:Custom_datasets", "base_model:Alphacode-AI/Alphallama3-8B", "base_model:quantized:Alphacode-AI/Alphallama3-8B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-02T01:16:41Z
--- base_model: Alphacode-AI/Alphallama3-8B datasets: - Custom_datasets language: - ko library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Alphacode-AI/Alphallama3-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | 
[GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF
mradermacher
2024-05-06T04:30:03Z
118
0
transformers
[ "transformers", "gguf", "tr", "dataset:aerdincdal/CBDDO-LLM-DB-V1", "base_model:aerdincdal/CBDDO-LLM-8B-Instruct-v1", "base_model:quantized:aerdincdal/CBDDO-LLM-8B-Instruct-v1", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-02T11:37:44Z
--- base_model: aerdincdal/CBDDO-LLM-8B-Instruct-v1 datasets: - aerdincdal/CBDDO-LLM-DB-V1 language: - tr library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/aerdincdal/CBDDO-LLM-8B-Instruct-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/CBDDO-LLM-8B-Instruct-v1-GGUF/resolve/main/CBDDO-LLM-8B-Instruct-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Llama-3-8b-Ita-GGUF
mradermacher
2024-05-06T04:29:33Z
26
1
transformers
[ "transformers", "gguf", "it", "en", "dataset:DeepMount00/llm_ita_ultra", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:quantized:DeepMount00/Llama-3-8b-Ita", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-03T01:27:19Z
--- base_model: DeepMount00/Llama-3-8b-Ita datasets: - DeepMount00/llm_ita_ultra language: - it - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/DeepMount00/Llama-3-8b-Ita <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8b-Ita-GGUF/resolve/main/Llama-3-8b-Ita.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
appvoid/merging-8
appvoid
2024-05-06T04:28:58Z
138
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:finetune:appvoid/palmer-003", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T04:28:13Z
--- base_model: - appvoid/palmer-003 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: appvoid/palmer-003 layer_range: [0, 10] - sources: - model: appvoid/palmer-003 layer_range: [8, 15] - sources: - model: appvoid/palmer-003 layer_range: [13, 20] - sources: - model: appvoid/palmer-003 layer_range: [16, 21] merge_method: passthrough dtype: float16 ```
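The passthrough config above stacks four overlapping slices of the same model. Assuming mergekit's usual half-open `layer_range` semantics, the depth of the merged model is just the sum of the slice lengths; a quick sketch:

```python
# Layer ranges copied from the YAML above; passthrough merging simply
# concatenates the slices, so the output depth is the sum of slice sizes.
ranges = [(0, 10), (8, 15), (13, 20), (16, 21)]
total_layers = sum(end - start for start, end in ranges)
print(total_layers)  # 10 + 7 + 7 + 5 = 29 layers in the merged model
```

Overlapping ranges mean some source layers appear more than once in the output, which is what makes the merge deeper than the base model.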
mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF
mradermacher
2024-05-06T04:28:21Z
7
0
transformers
[ "transformers", "gguf", "en", "ja", "base_model:HachiML/Swallow-MS-7b-instruct-v0.1", "base_model:quantized:HachiML/Swallow-MS-7b-instruct-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-05T12:13:20Z
--- base_model: HachiML/Swallow-MS-7b-instruct-v0.1 language: - en - ja library_name: transformers license: apache-2.0 model_type: mistral quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/HachiML/Swallow-MS-7b-instruct-v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q2_K.gguf) | Q2_K | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality | | 
[GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q6_K.gguf) | Q6_K | 6.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Swallow-MS-7b-instruct-v0.1-GGUF/resolve/main/Swallow-MS-7b-instruct-v0.1.f16.gguf) | f16 | 14.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/spring-chicken-8x8b-GGUF
mradermacher
2024-05-06T04:28:19Z
19
1
transformers
[ "transformers", "gguf", "llama-3", "en", "base_model:maldv/spring-chicken-8x8b", "base_model:quantized:maldv/spring-chicken-8x8b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-05T12:21:18Z
--- base_model: maldv/spring-chicken-8x8b language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - llama-3 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/maldv/spring-chicken-8x8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q2_K.gguf) | Q2_K | 17.9 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.IQ3_XS.gguf) | IQ3_XS | 19.9 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.IQ3_S.gguf) | IQ3_S | 21.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q3_K_S.gguf) | Q3_K_S | 21.0 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.IQ3_M.gguf) | IQ3_M | 22.0 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q3_K_M.gguf) | Q3_K_M | 23.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q3_K_L.gguf) | Q3_K_L | 24.8 | | | 
[GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.IQ4_XS.gguf) | IQ4_XS | 26.0 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q4_K_S.gguf) | Q4_K_S | 27.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q4_K_M.gguf) | Q4_K_M | 29.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q5_K_S.gguf) | Q5_K_S | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q5_K_M.gguf) | Q5_K_M | 33.9 | | | [GGUF](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q6_K.gguf) | Q6_K | 39.1 | very good quality | | [PART 1](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF/resolve/main/spring-chicken-8x8b.Q8_0.gguf.part2of2) | Q8_0 | 50.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF
mradermacher
2024-05-06T04:28:10Z
43
0
transformers
[ "transformers", "gguf", "en", "base_model:migtissera/Tess-2.0-Llama-3-8B", "base_model:quantized:migtissera/Tess-2.0-Llama-3-8B", "license:llama3", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-05-05T14:21:45Z
--- base_model: migtissera/Tess-2.0-Llama-3-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/migtissera/Tess-2.0-Llama-3-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | 
[GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q5_K_M.gguf) | 
i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Tess-2.0-Llama-3-8B-i1-GGUF/resolve/main/Tess-2.0-Llama-3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF
mradermacher
2024-05-06T04:28:06Z
50
0
transformers
[ "transformers", "gguf", "en", "base_model:elyn-dev/Llama-3-Soliloquy-Max-70B-v1", "base_model:quantized:elyn-dev/Llama-3-Soliloquy-Max-70B-v1", "license:llama3", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-05-05T16:18:47Z
--- base_model: openlynn/Llama-3-Soliloquy-Max-70B-v1 language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/openlynn/Llama-3-Soliloquy-Max-70B-v1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Soliloquy-Max-70B-v1-i1-GGUF/resolve/main/Llama-3-Soliloquy-Max-70B-v1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
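The multi-part Q6_K files above must be concatenated into a single .gguf before loading. A minimal Python sketch of that step (file names here are placeholders for the actual part files):

```python
def concat_parts(part_paths, out_path):
    """Concatenate GGUF split files (e.g. foo.gguf.part1of2, foo.gguf.part2of2) in order."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                # stream in 1 MiB chunks to avoid loading tens of GB into memory
                while chunk := f.read(1 << 20):
                    out.write(chunk)
```

On Unix the same thing can be done with `cat part1 part2 > whole.gguf`; the parts must be joined in order, with nothing in between.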
netcat420/MFANNv0.8-GGUF-DEFUNCT
netcat420
2024-05-06T04:22:49Z
6
0
null
[ "gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-06T03:07:39Z
--- license: apache-2.0 --- Unfortunately, testing on this release has determined it to be defunct due to the multilingual base model that was used. I tried to experiment with it, and it ultimately failed. Worth a shot, though :)
mradermacher/Llama-3-8B-Instruct-norefusal-GGUF
mradermacher
2024-05-06T04:20:55Z
9
0
transformers
[ "transformers", "gguf", "en", "base_model:theo77186/Llama-3-8B-Instruct-norefusal", "base_model:quantized:theo77186/Llama-3-8B-Instruct-norefusal", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-06T03:52:40Z
--- base_model: theo77186/Llama-3-8B-Instruct-norefusal language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/theo77186/Llama-3-8B-Instruct-norefusal <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q3_K_L.gguf) 
| Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-norefusal-GGUF/resolve/main/Llama-3-8B-Instruct-norefusal.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
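The size column maps roughly onto bits per weight for the ~8B-parameter base model; a quick back-of-the-envelope check (the parameter count below is an approximation, not an exact figure):

```python
def bits_per_weight(size_gb, n_params):
    """Approximate bits per weight for a quantized checkpoint."""
    return size_gb * 1e9 * 8 / n_params

# Llama-3-8B has roughly 8.0e9 parameters (approximation)
f16_bpw = bits_per_weight(16.2, 8.0e9)  # the f16 row above, listed as "16 bpw"
q8_bpw = bits_per_weight(8.6, 8.0e9)    # the Q8_0 row
```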
shtapm/whisper-large_0502_decoder_0_4_200steps
shtapm
2024-05-06T04:20:51Z
146
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-06T04:15:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
appvoid/merging-4
appvoid
2024-05-06T04:17:38Z
139
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:finetune:appvoid/palmer-003", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T04:16:58Z
--- base_model: - appvoid/palmer-003 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: appvoid/palmer-003 layer_range: [0, 8] - sources: - model: appvoid/palmer-003 layer_range: [6, 12] - sources: - model: appvoid/palmer-003 layer_range: [10, 21] merge_method: passthrough dtype: float16 ```
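Passthrough merging simply concatenates the listed layer ranges, so the configuration above produces a deeper model than the base. A quick sketch of the resulting depth (assuming mergekit's half-open `layer_range` semantics; note layers 6-7 and 10-11 appear twice and are duplicated, not deduplicated):

```python
slices = [(0, 8), (6, 12), (10, 21)]  # layer_range entries from the YAML above

# passthrough stacks the slices, so merged depth is the sum of range lengths
merged_layers = sum(end - start for start, end in slices)  # 8 + 6 + 11
```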
appvoid/merging-3
appvoid
2024-05-06T04:15:12Z
138
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:appvoid/palmer-003", "base_model:finetune:appvoid/palmer-003", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T04:14:34Z
--- base_model: - appvoid/palmer-003 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: appvoid/palmer-003 layer_range: [0, 8] - sources: - model: appvoid/palmer-003 layer_range: [6, 21] merge_method: passthrough dtype: float16 ```
danicrag/500WORDSESSAY
danicrag
2024-05-06T04:14:18Z
0
0
null
[ "region:us" ]
null
2024-05-06T04:13:12Z
London, a city known for its historical landmarks, cultural diversity, and vibrant academic institutions, is also a hub for students seeking academic excellence. Amidst the hustle and bustle of city life, students often find themselves navigating through a plethora of academic tasks, with essay writing being a significant component of their academic journey. In this bustling metropolis, the demand for essay writing services has surged, offering students the support they need to excel in their academic endeavors. **Unparalleled Expertise** Essay writing services in London boast a team of expert writers who possess unparalleled expertise in various fields of study. These writers, often holding advanced degrees from prestigious universities in the UK, understand the nuances of academic writing [essay writing service uk](https://500wordsessay.com/) and are well-versed in crafting compelling essays tailored to meet the unique requirements of each assignment. Whether it's a literature review, a research paper, or a critical analysis, these professionals deliver top-notch quality that adheres to academic standards and exceeds expectations. **Customized Approach** One of the key advantages of essay writing services in London is their commitment to a customized approach. Recognizing that every student has distinct needs and preferences, these services offer personalized assistance to ensure maximum satisfaction. From initial consultation to final delivery, students have the opportunity to collaborate with writers, providing input and feedback throughout the writing process. This collaborative approach not only ensures that the final product aligns with the student's vision but also fosters a deeper understanding of the subject matter. **Timely Delivery** In the fast-paced environment of academic life, deadlines are non-negotiable. Essay writing services in London understand the importance of timely delivery and prioritize punctuality in their operations.
Whether the deadline is looming or distant, these services employ efficient workflows and rigorous quality control measures to ensure that essays are completed and delivered on time, allowing students to meet their academic obligations [Essay Writing Service London](https://500wordsessay.com/essay-writing-service-london/) without compromising on quality. **Plagiarism-Free Content** Originality is paramount in academic writing, and essay writing services in London uphold the highest standards of academic integrity. Each essay is meticulously researched, written from scratch, and subjected to rigorous plagiarism checks to ensure its authenticity. By employing advanced plagiarism detection software and thorough editorial reviews, these services guarantee that every essay is free from any form of plagiarism, giving students the confidence to submit their work with pride. **Confidentiality and Security** Privacy and confidentiality are of utmost importance when availing oneself of essay writing services, and reputable providers in London prioritize the security of their clients' information. From personal details to payment transactions, stringent measures are in place to safeguard sensitive data and uphold client confidentiality. Students can rest assured that their identities remain anonymous, and their interactions with the service are kept strictly confidential, providing peace of mind throughout the engagement. **Conclusion** Essay writing services in London serve as invaluable resources for students navigating the rigors of academic life in one of the world's most dynamic cities. With their expert writers, customized approach, timely delivery, commitment to originality, and emphasis on confidentiality, these services empower students to achieve academic success while balancing their myriad responsibilities. 
By harnessing the support and expertise of these services, students can navigate the [500 Words Essay About Work Immersion](https://500wordsessay.com/500-words-essay/500-words-essay-about-work-immersion/) complexities of essay writing with confidence and embark on a journey towards academic excellence.
APLunch/poca-SoccerTwos-1
APLunch
2024-05-06T04:09:25Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SoccerTwos", "ML-Agents-SoccerTwos", "deep-reinforcement-learning", "reinforcement-learning", "region:us" ]
reinforcement-learning
2024-05-06T03:49:57Z
--- library_name: ml-agents tags: - SoccerTwos - ML-Agents-SoccerTwos - deep-reinforcement-learning - reinforcement-learning --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: APLunch/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
saffin/vit_ivi_first_test
saffin
2024-05-06T04:09:06Z
3
0
transformers
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-04-12T02:44:08Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_keras_callback model-index: - name: saffin/vit_ivi_first_test results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # saffin/vit_ivi_first_test This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2158 - Train Sparse Categorical Accuracy: 1.0 - Validation Loss: 0.2144 - Validation Sparse Categorical Accuracy: 1.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1525, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 1.2381 | 0.8426 | 0.8788 | 1.0 | 0 | | 0.6525 | 1.0 | 0.5058 | 1.0 | 1 | | 0.3859 | 1.0 | 0.3354 | 1.0 | 2 | | 0.2715 | 1.0 | 0.2602 | 1.0 | 3 | | 0.2158 | 1.0 | 0.2144 | 1.0 | 4 | ### Framework versions - Transformers 4.31.0 - TensorFlow 2.8.0 - Datasets 2.18.0 - Tokenizers 0.13.3
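The `PolynomialDecay` schedule in the optimizer config above reduces to a linear ramp here (power=1.0). A pure-Python sketch of the learning rate it produces, following the Keras formula for the non-cycling case:

```python
def polynomial_decay(step, initial_lr=3e-5, decay_steps=1525, end_lr=0.0, power=1.0):
    """Keras-style PolynomialDecay without cycling: lr falls from initial_lr to end_lr."""
    step = min(step, decay_steps)  # clamp: lr stays at end_lr after decay_steps
    return (initial_lr - end_lr) * (1 - step / decay_steps) ** power + end_lr
```

With power=1.0 the rate falls linearly from 3e-5 at step 0 to 0.0 at step 1525.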
FallenMerick/Smart-Lemon-Cookie-7B-GGUF
FallenMerick
2024-05-06T04:05:11Z
118
1
null
[ "gguf", "quantized", "4-bit", "6-bit", "8-bit", "GGUF", "merge", "mistral", "text-generation", "base_model:FallenMerick/Smart-Lemon-Cookie-7B", "base_model:quantized:FallenMerick/Smart-Lemon-Cookie-7B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-04-30T04:59:51Z
--- base_model: - FallenMerick/Smart-Lemon-Cookie-7B model_name: Smart-Lemon-Cookie-7B model_type: mistral pipeline_tag: text-generation tags: - quantized - 4-bit - 6-bit - 8-bit - GGUF - merge - mistral - text-generation --- # Smart-Lemon-Cookie-7B These are GGUF quants for the following model: https://huggingface.co/FallenMerick/Smart-Lemon-Cookie-7B
ytcheng/llama-3-8b-hf-sm-lora-merged
ytcheng
2024-05-06T04:02:23Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T03:58:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xino/olive-whisper-onnx
Xino
2024-05-06T03:58:22Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2024-05-05T14:31:11Z
--- license: apache-2.0 ---
nes470/new-attempt-pipeline
nes470
2024-05-06T03:55:42Z
128
0
transformers
[ "transformers", "pytorch", "safetensors", "QA-umd-quizbowl", "question-answering", "custom_code", "arxiv:1910.09700", "region:us" ]
question-answering
2024-05-06T03:25:49Z
thinh-huynh-re/ppo-LunarLander-v2
thinh-huynh-re
2024-05-06T03:54:48Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-06T03:54:30Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 254.53 +/- 12.63
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
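For context, the `mean_reward value: 254.53 +/- 12.63` entry in the card above is the usual stable-baselines3 evaluation summary: the mean and standard deviation of total episode reward over a batch of evaluation episodes. A minimal sketch of that computation follows; the episode returns here are invented for illustration and are not from the actual run.

```python
# Mean +/- std of total episode reward, the metric format used in SB3 model cards.
# These episode returns are made-up placeholders, not the model's real evaluation data.
episode_returns = [260.1, 240.5, 255.0, 262.9]

n = len(episode_returns)
mean_reward = sum(episode_returns) / n
std_reward = (sum((r - mean_reward) ** 2 for r in episode_returns) / n) ** 0.5

print(f"{mean_reward:.2f} +/- {std_reward:.2f}")
```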
DUAL-GPO/zephyr-7b-gpo-v1-i1
DUAL-GPO
2024-05-06T03:44:46Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-05T07:57:19Z
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-gpo-v1-i1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-gpo-v1-i1

This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-update3-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-update3-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0

### Training results

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2
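The hyperparameter list in the card above is internally consistent: the effective (total) train batch size reported by the HF Trainer is the per-device batch size multiplied by the number of devices and the gradient accumulation steps. A quick check against the card's numbers:

```python
# total_train_batch_size = per-device batch size * num devices * grad accumulation steps
train_batch_size = 2             # per device
num_devices = 4
gradient_accumulation_steps = 2

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)    # 16, matching the card
```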
shtapm/whisper-large_0502_decoder_28_32_200steps
shtapm
2024-05-06T03:44:24Z
146
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-06T03:40:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gaduhhartawan/indobart-base-v2
gaduhhartawan
2024-05-06T03:44:03Z
191
2
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "summarization", "id", "dataset:id_liputan6", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2024-05-06T02:10:56Z
---
license: mit
datasets:
- id_liputan6
language:
- id
metrics:
- rouge
pipeline_tag: summarization
---
zhongyw/model
zhongyw
2024-05-06T03:41:57Z
11
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-06T03:39:19Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** zhongyw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ShenaoZ/0.0001_zephyrdpoinit_nodpo_3iters_bs256_555lr_iter_1
ShenaoZ
2024-05-06T03:36:13Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:finetune:HuggingFaceH4/zephyr-7b-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T02:07:15Z
---
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.0001_zephyrdpoinit_nodpo_3iters_bs256_555lr_iter_1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 0.0001_zephyrdpoinit_nodpo_3iters_bs256_555lr_iter_1

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
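Several cards in this dump, including the one above, carry the `trl` and `dpo` tags. Conceptually, the DPO objective optimized by TRL's `DPOTrainer` is the negative log-sigmoid of a scaled difference between the policy's and the reference model's chosen-vs-rejected log-ratios. The sketch below is a conceptual illustration, not the library code; the log-probability values and the `beta=0.1` setting are invented for the example.

```python
import math

# Conceptual sketch of the DPO loss (Rafailov et al.), not TRL's implementation.
# Inputs are per-sequence log-probs under the policy and the frozen reference model.
def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1 / (1 + math.exp(-logits)))  # -log(sigmoid(logits))

# If the policy prefers the chosen answer more than the reference does, loss < log(2).
print(dpo_loss(-10.0, -20.0, -12.0, -18.0) < math.log(2))  # True
```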
mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF
mradermacher
2024-05-06T03:35:48Z
72
4
transformers
[ "transformers", "gguf", "moe", "en", "base_model:xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B", "base_model:quantized:xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-05T21:12:10Z
---
base_model: xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- moe
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hfhfix -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q2_K.gguf) | Q2_K | 9.4 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.IQ3_XS.gguf) | IQ3_XS | 10.5 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q3_K_S.gguf) | Q3_K_S | 11.0 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.IQ3_M.gguf) | IQ3_M | 11.2 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q3_K_L.gguf) | Q3_K_L | 13.1 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.IQ4_XS.gguf) | IQ4_XS | 13.7 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q5_K_S.gguf) | Q5_K_S | 17.3 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q5_K_M.gguf) | Q5_K_M | 17.8 |  |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/L3-ChaoticSoliloquy-v1.5-4x8B-GGUF/resolve/main/L3-ChaoticSoliloquy-v1.5-4x8B.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
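The usage note in the card above points at TheBloke's READMEs for handling multi-part GGUF files. The idea is plain byte-level concatenation of the parts in order. The sketch below is self-contained and uses tiny dummy files in place of real multi-gigabyte parts; the `part1of2`/`part2of2` naming is illustrative (check the actual repository listing for the real part names).

```shell
# Simulate a split GGUF, then join it: parts are contiguous byte ranges,
# so concatenating them in order with `cat` reassembles the file.
printf 'first-half'  > model.Q8_0.gguf.part1of2   # dummy stand-in for a real part
printf 'second-half' > model.Q8_0.gguf.part2of2
cat model.Q8_0.gguf.part1of2 model.Q8_0.gguf.part2of2 > model.Q8_0.gguf
wc -c < model.Q8_0.gguf    # combined size equals the sum of the parts
```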
shtapm/whisper-large_0502_decoder_24_28_200steps
shtapm
2024-05-06T03:17:42Z
149
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-06T03:13:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dshvadskiy/leagaleasy-llama-3-instruct-v1
dshvadskiy
2024-05-06T03:17:17Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-05-06T03:15:05Z
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- generator
model-index:
- name: leagaleasy-llama-3-instruct-v1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# leagaleasy-llama-3-instruct-v1

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
Lingrui1/ppo-LunarLander-v2
Lingrui1
2024-05-06T03:13:34Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-06T03:13:11Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 258.90 +/- 17.19
      name: mean_reward
      verified: false
---

# **ppo** Agent playing **LunarLander-v2**

This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
baaaaaaaam/v5
baaaaaaaam
2024-05-06T03:12:48Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-06T02:41:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zhongyw/lora_model
zhongyw
2024-05-06T03:11:29Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-06T03:11:17Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** zhongyw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ShenaoZ/0.00001_withdpo_4iters_bs256_555lr_iter_4
ShenaoZ
2024-05-06T03:09:41Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.00001_withdpo_4iters_bs256_555lr_iter_3", "base_model:finetune:ShenaoZ/0.00001_withdpo_4iters_bs256_555lr_iter_3", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T02:08:54Z
---
license: mit
base_model: ShenaoZ/0.00001_withdpo_4iters_bs256_555lr_iter_3
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- updated
- original
model-index:
- name: 0.00001_withdpo_4iters_bs256_555lr_iter_4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 0.00001_withdpo_4iters_bs256_555lr_iter_4

This model is a fine-tuned version of [ShenaoZ/0.00001_withdpo_4iters_bs256_555lr_iter_3](https://huggingface.co/ShenaoZ/0.00001_withdpo_4iters_bs256_555lr_iter_3) on the updated and the original datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
mikhail-panzo/zlm_b128_le4_s4000
mikhail-panzo
2024-05-06T03:08:58Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-04-28T01:30:44Z
--- license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer model-index: - name: zlm_b128_le4_s4000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zlm_b128_le4_s4000 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5173 | 0.8377 | 500 | 0.4566 | | 0.455 | 1.6754 | 1000 | 0.4031 | | 0.4175 | 2.5131 | 1500 | 0.3778 | | 0.4022 | 3.3508 | 2000 | 0.3678 | | 0.3848 | 4.1885 | 2500 | 0.3523 | | 0.3763 | 5.0262 | 3000 | 0.3426 | | 0.3665 | 5.8639 | 3500 | 0.3398 | | 0.3642 | 6.7016 | 4000 | 0.3305 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
Holarissun/RM-TLDR_human_loraR64_-1_gemma2b_lr1.41e-05_bs2_g4
Holarissun
2024-05-06T03:07:11Z
0
0
peft
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-06T03:07:08Z
--- license: gemma library_name: peft tags: - trl - reward-trainer - generated_from_trainer metrics: - accuracy base_model: google/gemma-2b model-index: - name: RM-TLDR_human_loraR64_-1_gemma2b_lr1.41e-05_bs2_g4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RM-TLDR_human_loraR64_-1_gemma2b_lr1.41e-05_bs2_g4 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6086 - Accuracy: 0.6816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5634 | 1.0 | 11168 | 0.6055 | 0.6788 | | 0.5285 | 2.0 | 22336 | 0.6086 | 0.6816 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
eduagarcia/RoBERTaLexPT-base
eduagarcia
2024-05-06T02:56:42Z
314
16
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "legal", "pt", "dataset:eduagarcia/LegalPT_dedup", "dataset:eduagarcia/CrawlPT_dedup", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-09T19:00:57Z
--- datasets: - eduagarcia/LegalPT_dedup - eduagarcia/CrawlPT_dedup language: - pt pipeline_tag: fill-mask tags: - legal model-index: - name: RoBERTaLexPT-base results: - task: type: token-classification dataset: type: lener_br name: lener_br split: test metrics: - type: seqeval value: 0.9073 name: F1 args: scheme: IOB2 - task: type: token-classification dataset: type: eduagarcia/PortuLex_benchmark name: UlyNER-PL Coarse config: UlyssesNER-Br-PL-coarse split: test metrics: - type: seqeval value: 0.8856 name: F1 args: scheme: IOB2 - task: type: token-classification dataset: type: eduagarcia/PortuLex_benchmark name: UlyNER-PL Fine config: UlyssesNER-Br-PL-fine split: test metrics: - type: seqeval value: 0.8603 name: F1 args: scheme: IOB2 - task: type: token-classification dataset: type: eduagarcia/PortuLex_benchmark name: FGV-STF config: fgv-coarse split: test metrics: - type: seqeval value: 0.8040 name: F1 args: scheme: IOB2 - task: type: token-classification dataset: type: eduagarcia/PortuLex_benchmark name: RRIP config: rrip split: test metrics: - type: seqeval value: 0.8322 name: F1 args: scheme: IOB2 - task: type: token-classification dataset: type: eduagarcia/PortuLex_benchmark name: PortuLex split: test metrics: - type: seqeval value: 0.8541 name: Average F1 args: scheme: IOB2 license: cc-by-4.0 metrics: - seqeval --- # RoBERTaLexPT-base RoBERTaLexPT-base is a Portuguese Masked Language Model pretrained from scratch from the [LegalPT](https://huggingface.co/datasets/eduagarcia/LegalPT_dedup) and [CrawlPT](https://huggingface.co/datasets/eduagarcia/CrawlPT_dedup) corpora, using the same architecture as [RoBERTa-base](https://huggingface.co/FacebookAI/roberta-base), introduced by Liu et al. (2019). 
- **Language(s) (NLP):** Portuguese (pt-BR and pt-PT)
- **License:** [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/deed.en)
- **Repository:** https://github.com/eduagarcia/roberta-legal-portuguese
- **Paper:** https://aclanthology.org/2024.propor-1.38/

## Evaluation

The model was evaluated on the ["PortuLex" benchmark](https://huggingface.co/datasets/eduagarcia/PortuLex_benchmark), a four-task benchmark designed to evaluate the quality and performance of language models in the Portuguese legal domain.

Macro F1-Score (%) for multiple models evaluated on PortuLex benchmark test splits:

| **Model** | **LeNER** | **UlyNER-PL** | **FGV-STF** | **RRIP** | **Average (%)** |
|----------------------------------------------------------------------------|-----------|-----------------|-------------|:---------:|-----------------|
| | | Coarse/Fine | Coarse | | |
| [BERTimbau-base](https://huggingface.co/neuralmind/bert-base-portuguese-cased) | 88.34 | 86.39/83.83 | 79.34 | 82.34 | 83.78 |
| [BERTimbau-large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) | 88.64 | 87.77/84.74 | 79.71 | **83.79** | 84.60 |
| [Albertina-PT-BR-base](https://huggingface.co/PORTULAN/albertina-ptbr-based) | 89.26 | 86.35/84.63 | 79.30 | 81.16 | 83.80 |
| [Albertina-PT-BR-xlarge](https://huggingface.co/PORTULAN/albertina-ptbr) | 90.09 | 88.36/**86.62** | 79.94 | 82.79 | 85.08 |
| [BERTikal-base](https://huggingface.co/felipemaiapolo/legalnlp-bert) | 83.68 | 79.21/75.70 | 77.73 | 81.11 | 79.99 |
| [JurisBERT-base](https://huggingface.co/alfaneo/jurisbert-base-portuguese-uncased) | 81.74 | 81.67/77.97 | 76.04 | 80.85 | 79.61 |
| [BERTimbauLAW-base](https://huggingface.co/alfaneo/bertimbaulaw-base-portuguese-cased) | 84.90 | 87.11/84.42 | 79.78 | 82.35 | 83.20 |
| [Legal-XLM-R-base](https://huggingface.co/joelniklaus/legal-xlm-roberta-base) | 87.48 | 83.49/83.16 | 79.79 | 82.35 | 83.24 |
| [Legal-XLM-R-large](https://huggingface.co/joelniklaus/legal-xlm-roberta-large) | 88.39 | 84.65/84.55 | 79.36 | 81.66 | 83.50 |
| [Legal-RoBERTa-PT-large](https://huggingface.co/joelniklaus/legal-portuguese-roberta-large) | 87.96 | 88.32/84.83 | 79.57 | 81.98 | 84.02 |
| **Ours** | | | | | |
| RoBERTaTimbau-base (Reproduction of BERTimbau) | 89.68 | 87.53/85.74 | 78.82 | 82.03 | 84.29 |
| RoBERTaLegalPT-base (Trained on LegalPT) | 90.59 | 85.45/84.40 | 79.92 | 82.84 | 84.57 |
| [RoBERTaCrawlPT-base](https://huggingface.co/eduagarcia/RoBERTaCrawlPT-base) (Trained on CrawlPT) | 89.24 | 88.22/86.58 | 79.88 | 82.80 | 84.83 |
| **RoBERTaLexPT-base (this)** (Trained on CrawlPT + LegalPT) | **90.73** | **88.56**/86.03 | **80.40** | 83.22 | **85.41** |

In summary, RoBERTaLexPT consistently achieves top effectiveness on legal NLP tasks despite its base size. With sufficient pre-training data, it can surpass larger models, highlighting the importance of domain-diverse training data over sheer model scale.

## Training Details

RoBERTaLexPT-base is pretrained on:
- [LegalPT](https://huggingface.co/datasets/eduagarcia/LegalPT_dedup), a Portuguese legal corpus built by aggregating diverse sources, up to 125 GiB of data.
- [CrawlPT](https://huggingface.co/datasets/eduagarcia/CrawlPT_dedup), a composition of three general Portuguese corpora: [brWaC](https://huggingface.co/datasets/brwac), the [CC100 PT subset](https://huggingface.co/datasets/eduagarcia/cc100-pt), and the [OSCAR-2301 PT subset](https://huggingface.co/datasets/eduagarcia/OSCAR-2301-pt_dedup).

### Training Procedure

Our pretraining process was executed using the [Fairseq library v0.10.2](https://github.com/facebookresearch/fairseq/tree/v0.10.2) on a DGX-A100 cluster, utilizing a total of 2 Nvidia A100 80 GB GPUs. The complete training of a single configuration takes approximately three days.
This computational cost is similar to the work of [BERTimbau-base](https://huggingface.co/neuralmind/bert-base-portuguese-cased), exposing the model to approximately 65 billion tokens during training.

#### Preprocessing

We deduplicated all subsets of the LegalPT and CrawlPT corpora using the MinHash algorithm and Locality Sensitive Hashing implementation from the library [text-dedup](https://github.com/ChenghaoMou/text-dedup) to find clusters of duplicate documents.

To ensure that domain models are not constrained by a generic vocabulary, we used the BPE algorithm from [HuggingFace Tokenizers](https://github.com/huggingface/tokenizers) to train a vocabulary for each pre-training corpus.

#### Training Hyperparameters

The pretraining process involved training the model for 62,500 steps, with a batch size of 2048 and a learning rate of 4e-4, each sequence containing a maximum of 512 tokens. The weights were randomly initialized. We employed the masked language modeling objective, in which 15% of the input tokens were randomly masked. Optimization was performed with the AdamW optimizer, using a linear warmup followed by a linear decay learning rate schedule.
For other parameters we adopted the standard [RoBERTa-base hyperparameters](https://huggingface.co/FacebookAI/roberta-base):

| **Hyperparameter**     | **RoBERTa-base** |
|------------------------|-----------------:|
| Number of layers       | 12               |
| Hidden size            | 768              |
| FFN inner hidden size  | 3072             |
| Attention heads        | 12               |
| Attention head size    | 64               |
| Dropout                | 0.1              |
| Attention dropout      | 0.1              |
| Warmup steps           | 6k               |
| Peak learning rate     | 4e-4             |
| Batch size             | 2048             |
| Weight decay           | 0.01             |
| Maximum training steps | 62.5k            |
| Learning rate decay    | Linear           |
| AdamW $$\epsilon$$     | 1e-6             |
| AdamW $$\beta_1$$      | 0.9              |
| AdamW $$\beta_2$$      | 0.98             |
| Gradient clipping      | 0.0              |

## Citation

```
@inproceedings{garcia-etal-2024-robertalexpt,
    title = "{R}o{BERT}a{L}ex{PT}: A Legal {R}o{BERT}a Model pretrained with deduplication for {P}ortuguese",
    author = "Garcia, Eduardo A. S. and Silva, Nadia F. F. and Siqueira, Felipe and Albuquerque, Hidelberg O. and Gomes, Juliana R. S. and Souza, Ellen and Lima, Eliomar A.",
    editor = "Gamallo, Pablo and Claro, Daniela and Teixeira, Ant{\'o}nio and Real, Livy and Garcia, Marcos and Oliveira, Hugo Gon{\c{c}}alo and Amaro, Raquel",
    booktitle = "Proceedings of the 16th International Conference on Computational Processing of Portuguese",
    month = mar,
    year = "2024",
    address = "Santiago de Compostela, Galicia/Spain",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.propor-1.38",
    pages = "374--383",
}
```

## Acknowledgment

This work has been supported by the AI Center of Excellence (Centro de Excelência em Inteligência Artificial – CEIA) of the Institute of Informatics at the Federal University of Goiás (INF-UFG).
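The MinHash-based deduplication described in the preprocessing section above can be illustrated with a toy sketch. This is an illustrative stand-in, not the text-dedup pipeline's actual code: each document is reduced to a set of character shingles, each salted hash function keeps its minimum over that set, and the fraction of matching minima between two signatures estimates the Jaccard similarity of the shingle sets.

```python
import hashlib

def shingles(text, k=3):
    """Character k-grams of a whitespace-normalized document."""
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def minhash_signature(shingle_set, num_hashes=64):
    """Keep the minimum of each salted hash over the shingle set."""
    return [
        min(int.from_bytes(hashlib.md5(f"{seed}:{s}".encode()).digest()[:8], "big")
            for s in shingle_set)
        for seed in range(num_hashes)
    ]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching minima approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

In an LSH scheme, signatures are further split into bands and documents sharing a band bucket become duplicate candidates; candidate pairs with high estimated Jaccard are then clustered and collapsed.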
ShenaoZ/0.0001_sft_nodpo_3iters_bs256_555lr_iter_1
ShenaoZ
2024-05-06T02:50:01Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T01:28:49Z
--- license: mit base_model: HuggingFaceH4/mistral-7b-sft-beta tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.0001_sft_nodpo_3iters_bs256_555lr_iter_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_sft_nodpo_3iters_bs256_555lr_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
FredDYyy/speecht5_finetuned_vi
FredDYyy
2024-05-06T02:49:59Z
88
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "vi", "dataset:mozilla-foundation/common_voice_13_0", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-speech
2024-04-27T09:12:16Z
--- language: - vi license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 model-index: - name: SpeechT5 Finetuned Vi - FredDYyy results: [] pipeline_tag: text-to-speech --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5 Finetuned Vi - FredDYyy This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.4772 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5534 | 10.06 | 1000 | 0.5056 | | 0.528 | 20.13 | 2000 | 0.4843 | | 0.5119 | 30.19 | 3000 | 0.4811 | | 0.4994 | 40.25 | 4000 | 0.4772 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
anah1tbaghdassarian/wav2vec2-conformer-rope-large-960h-ft-armenian-CV17.0
anah1tbaghdassarian
2024-05-06T02:46:04Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-conformer", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:facebook/wav2vec2-conformer-rope-large-960h-ft", "base_model:finetune:facebook/wav2vec2-conformer-rope-large-960h-ft", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-04T21:10:13Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: facebook/wav2vec2-conformer-rope-large-960h-ft datasets: - common_voice_17_0 metrics: - wer model-index: - name: wav2vec2-conformer-rope-large-960h-ft-armenian-CV17.0 results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: hy-AM split: None args: hy-AM metrics: - type: wer value: 0.990876791521137 name: Wer --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-conformer-rope-large-960h-ft-armenian-CV17.0 This model is a fine-tuned version of [facebook/wav2vec2-conformer-rope-large-960h-ft](https://huggingface.co/facebook/wav2vec2-conformer-rope-large-960h-ft) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 3.1627 - Wer: 0.9909 - Cer: 0.8400 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 4.2764 | 1.0 | 325 | 3.1252 | 1.0 | 0.9984 | | 2.9396 | 2.0 | 650 | 3.1627 | 0.9909 | 0.8400 | ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
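The WER and CER reported in the card above are edit-distance metrics. A minimal sketch of word error rate, assuming the usual Levenshtein formulation rather than the exact scorer used during training (CER is the same computation over characters instead of words):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance between word sequences, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (free on match)
            ))
        prev = curr
    return prev[len(hyp)] / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis is much longer than the reference, which is consistent with near-1.0 scores on a model that has not yet converged.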
vilm/VinaLlama2-14B
vilm
2024-05-06T02:45:19Z
36
5
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "vi", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-02T06:34:12Z
---
license: mit
language:
- vi
---

# VinaLlama2-14B Beta

GGUF here: [VinaLlama2-14B-GGUF](https://huggingface.co/qnguyen3/14b-gguf)

**Top Features**:
- **Context Length**: 32,768 tokens.
- **VERY GOOD** at reasoning, mathematics, and creative writing.
- Works with **Langchain Agent** out of the box.

**Known Issues**:
- Still struggles a bit with Vietnamese facts (Hoang Sa & Truong Sa, historical questions).
- Hallucinates when reasoning.
- Can't do Vi-En/En-Vi translation (yet)!

**Quick use** (VRAM requirement: ~20GB):

```bash
pip install transformers accelerate
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "vilm/VinaLlama2-14B",
    torch_dtype='auto',
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("vilm/VinaLlama2-14B")

prompt = "Một cộng một bằng mấy?"
messages = [
    {"role": "system", "content": "Bạn là trợ lí AI hữu ích."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.25,
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
uirev/gemma-Code-Instruct-Finetune-test
uirev
2024-05-06T02:43:26Z
137
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T02:37:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Sumail/Chalice10
Sumail
2024-05-06T02:41:32Z
123
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "merge", "mergekit", "lazymergekit", "kalytm/nous-2", "GamblerOnTrain/S-9", "conversational", "base_model:kalytm/nous-2", "base_model:finetune:kalytm/nous-2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T02:39:42Z
---
tags:
- merge
- mergekit
- lazymergekit
- kalytm/nous-2
- GamblerOnTrain/S-9
base_model:
- kalytm/nous-2
- GamblerOnTrain/S-9
---

# Chalice10

Chalice10 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kalytm/nous-2](https://huggingface.co/kalytm/nous-2)
* [GamblerOnTrain/S-9](https://huggingface.co/GamblerOnTrain/S-9)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: kalytm/nous-2
        layer_range: [0, 24]
      - model: GamblerOnTrain/S-9
        layer_range: [0, 24]
merge_method: slerp
base_model: kalytm/nous-2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Sumail/Chalice10"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
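The `slerp` merge method in the configuration above interpolates corresponding parameter tensors along the great circle between them, with `t` controlling the mix per layer group. A toy sketch of the interpolation itself, assuming the standard slerp formula rather than mergekit's actual implementation:

```python
import math

def slerp(v0, v1, t):
    """Spherical linear interpolation between two parameter vectors."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (n0 * n1)))  # clamp for safety
    omega = math.acos(cos_omega)
    if omega < 1e-6:  # nearly parallel vectors: fall back to plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Unlike plain averaging, slerp preserves the norm geometry between the two weight vectors, which is why merge tools favor it for blending attention and MLP parameters at different `t` values.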
ana-grassmann/bert-base-uncased-finetuned-spam
ana-grassmann
2024-05-06T02:29:17Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-03-21T16:21:26Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: bert-base-uncased-finetuned-spam-real results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-spam-real This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0342 - Accuracy: 0.9942 - F1: 0.9945 - Precision: 0.9941 - Recall: 0.9949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.8529031222986405e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 15 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0418 | 1.0 | 4173 | 0.0471 | 0.9877 | 0.9882 | 0.9950 | 0.9815 | | 0.0186 | 2.0 | 8346 | 0.0394 | 0.9935 | 0.9938 | 0.9938 | 0.9938 | | 0.0096 | 3.0 | 12519 | 0.0342 | 0.9942 | 0.9945 | 0.9941 | 0.9949 | | 0.0059 | 4.0 | 16692 | 0.0421 | 0.9934 | 0.9937 | 0.9958 | 0.9917 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
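The accuracy, precision, recall, and F1 reported in the card above follow the standard confusion-matrix definitions. A minimal sketch of those relationships (standard formulas, not the evaluation code the Trainer used); note that the harmonic mean of the card's final precision (0.9941) and recall (0.9949) reproduces its reported F1 of 0.9945:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

def f1_from_pr(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```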
DUAL-GPO/zephyr-7b-gpo-log-v3-i1
DUAL-GPO
2024-05-06T02:26:55Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-05-05T10:51:58Z
--- license: apache-2.0 library_name: peft tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer base_model: mistralai/Mistral-7B-v0.1 datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: zephyr-7b-gpo-log-v3-i1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-7b-gpo-log-v3-i1 This model is a fine-tuned version of [DUAL-GPO/zephyr-7b-gpo-log-i0](https://huggingface.co/DUAL-GPO/zephyr-7b-gpo-log-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
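The card above fine-tunes with DPO (Direct Preference Optimization). As background, a toy sketch of the DPO objective for a single preference pair, following the standard formulation from the DPO paper rather than the `trl` implementation used here: the loss pushes the policy to widen the log-probability gap between the chosen and rejected completions relative to the reference model, scaled by `beta`.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair: -log(sigmoid(beta * margin))."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1 / (1 + math.exp(-logits)))  # -log(sigmoid(logits))
```

When the policy equals the reference, the margin is zero and the loss is log 2; favoring the chosen completion over the rejected one drives the loss below that baseline.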
sujeethav/roberta_tiny_0
sujeethav
2024-05-06T02:22:37Z
160
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-06T02:21:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
netcat420/MFANNv0.8-DEFUNCT
netcat420
2024-05-06T02:19:13Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-classification", "en", "dataset:netcat420/MFANN", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-05T23:02:10Z
--- library_name: transformers license: llama3 datasets: - netcat420/MFANN language: - en pipeline_tag: text-classification --- MFANN 8b version 0.8 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/AMQxzEyzbEWbZF8XeX7F1.png) fine-tuned on the MFANN dataset as it stood on 5/5/2024; the dataset is ever-expanding.
amara16/t5-qa-project
amara16
2024-05-06T02:16:29Z
116
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-06T02:03:56Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anthonymg/aerito-lince-finetuned-v1
anthonymg
2024-05-06T02:14:58Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-04T22:12:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nes470/quiz-bowl-model-qa-new-attempt
nes470
2024-05-06T02:09:34Z
105
0
transformers
[ "transformers", "safetensors", "QA-umd-quizbowl", "question-answering", "custom_code", "arxiv:1910.09700", "region:us" ]
question-answering
2024-05-05T21:18:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kitopang/llama3_generative_qa_2
kitopang
2024-05-06T01:52:11Z
52
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T01:29:41Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. 
More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qwertyuiop97/Bm11
qwertyuiop97
2024-05-06T01:46:48Z
3
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
2024-05-06T01:46:43Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: Boy wearing hat output: url: images/Snapshot_20130309.JPG base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # bm11 <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/qwertyuiop97/Bm11/tree/main) them in the Files & versions tab.
mradermacher/CodeMaster-v1-9b-GGUF
mradermacher
2024-05-06T01:46:15Z
26
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "KingNish/CodeMaster-v1-7b", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-05T21:43:57Z
--- base_model: KingNish/CodeMaster-v1-9b language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - KingNish/CodeMaster-v1-7b --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/KingNish/CodeMaster-v1-9b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q2_K.gguf) | Q2_K | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.IQ3_XS.gguf) | IQ3_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.IQ3_S.gguf) | IQ3_S | 4.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q3_K_S.gguf) | Q3_K_S | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.IQ3_M.gguf) | IQ3_M | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q3_K_M.gguf) | Q3_K_M | 4.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q3_K_L.gguf) | Q3_K_L | 5.0 | | | 
[GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.IQ4_XS.gguf) | IQ4_XS | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q4_K_S.gguf) | Q4_K_S | 5.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q4_K_M.gguf) | Q4_K_M | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q5_K_S.gguf) | Q5_K_S | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q5_K_M.gguf) | Q5_K_M | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q6_K.gguf) | Q6_K | 7.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.Q8_0.gguf) | Q8_0 | 9.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/CodeMaster-v1-9b-GGUF/resolve/main/CodeMaster-v1-9b.f16.gguf) | f16 | 18.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
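The f16 row in the quant table lets you back out an approximate parameter count from the file size and bits per weight. File sizes in the table are rounded, so this is only a rough sanity check, but it comes out close to the "9b" in the model name:

```python
# Approximate parameter count from GGUF file size and bits-per-weight.
# 18.4 GB at 16 bpw (the f16 row above); sizes are rounded, so expect ~9B.
size_gb = 18.4
bits_per_weight = 16

params_billion = size_gb * 8 / bits_per_weight  # GB * 8 bits / bpw
print(round(params_billion, 1))  # 9.2
```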
QuantFactory/Llama-3-Open-Ko-8B-GGUF
QuantFactory
2024-05-06T01:46:11Z
149
1
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko", "text-generation", "en", "ko", "arxiv:2310.04799", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:quantized:beomi/Llama-3-Open-Ko-8B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-05T15:34:51Z
--- language: - en - ko license: other tags: - facebook - meta - pytorch - llama - llama-3 - llama-3-ko pipeline_tag: text-generation license_name: llama3 license_link: LICENSE base_model: beomi/Llama-3-Open-Ko-8B --- # QuantFactory/Llama-3-Open-Ko-8B-GGUF - This is a quantized version of [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) created using llama.cpp ## Model Details **Llama-3-Open-Ko-8B** Llama-3-Open-Ko-8B is a continued-pretraining language model based on Llama-3-8B. It was trained entirely on publicly available resources, with 60GB+ of deduplicated texts. With the new Llama-3 tokenizer, pretraining was conducted with 17.7B+ tokens, slightly more than with the previous Korean tokenizer (Llama-2-Ko). Training was done on TPUv5e-256, with the warm support of Google's TRC program. **Note for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)** Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). It is NOT finetuned with any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models. **Meta Llama-3** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Junbum Lee (Beomi) **Variations** Llama-3-Open-Ko comes in one size — 8B. **Input** Models input text only. **Output** Models generate text and code only. 
**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama-3-Open-Ko </td> <td rowspan="2" >Same as *Open-Solar-Ko Dataset </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >17.7B+ </td> <td>Jun, 2023 </td> </tr> </table> *You can find dataset list here: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus **Model Release Date** 2024.04.24. **Status** This is a static model trained on an offline dataset. **License** Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. 
They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. 
It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
tsavage68/Mistral2_1000_STEPS_03beta_1e6_CDPOSFT
tsavage68
2024-05-06T01:44:20Z
4
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/mistralit2_1000_STEPS_5e7_SFT", "base_model:finetune:tsavage68/mistralit2_1000_STEPS_5e7_SFT", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T01:40:30Z
--- license: apache-2.0 base_model: tsavage68/mistralit2_1000_STEPS_5e7_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: Mistral2_1000_STEPS_03beta_1e6_CDPOSFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral2_1000_STEPS_03beta_1e6_CDPOSFT This model is a fine-tuned version of [tsavage68/mistralit2_1000_STEPS_5e7_SFT](https://huggingface.co/tsavage68/mistralit2_1000_STEPS_5e7_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9147 - Rewards/chosen: 0.2853 - Rewards/rejected: 0.2117 - Rewards/accuracies: 0.4637 - Rewards/margins: 0.0736 - Logps/rejected: -76.8158 - Logps/chosen: -74.5509 - Logits/rejected: -1.8957 - Logits/chosen: -1.8954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.831 | 0.0977 | 50 | 0.8025 | 0.6608 | 0.6344 | 0.4132 | 0.0264 | -75.4068 | -73.2992 | -2.0277 | -2.0274 | | 0.6868 | 0.1953 | 100 | 0.9417 | 0.1823 | 0.1932 | 0.4198 | 
-0.0109 | -76.8774 | -74.8943 | -2.0626 | -2.0624 | | 1.1447 | 0.2930 | 150 | 1.0449 | 0.0804 | 0.1916 | 0.4000 | -0.1112 | -76.8828 | -75.2339 | -2.0660 | -2.0660 | | 1.0588 | 0.3906 | 200 | 1.0433 | 0.5444 | 0.5437 | 0.4176 | 0.0007 | -75.7091 | -73.6874 | -1.8690 | -1.8690 | | 1.1749 | 0.4883 | 250 | 1.0509 | 0.0937 | 0.0800 | 0.3780 | 0.0138 | -77.2550 | -75.1895 | -2.7221 | -2.7221 | | 0.9602 | 0.5859 | 300 | 1.0556 | 0.6428 | 0.6497 | 0.3978 | -0.0069 | -75.3558 | -73.3592 | -2.1885 | -2.1884 | | 0.8567 | 0.6836 | 350 | 1.0032 | 0.8514 | 0.9163 | 0.4022 | -0.0649 | -74.4671 | -72.6639 | -1.9197 | -1.9197 | | 0.8145 | 0.7812 | 400 | 0.9071 | 0.7911 | 0.7180 | 0.4549 | 0.0730 | -75.1281 | -72.8651 | -2.1063 | -2.1063 | | 0.9805 | 0.8789 | 450 | 0.9092 | 1.0927 | 0.9910 | 0.4549 | 0.1017 | -74.2182 | -71.8597 | -2.3062 | -2.3062 | | 0.8022 | 0.9766 | 500 | 0.8968 | 1.2157 | 1.1916 | 0.4396 | 0.0241 | -73.5496 | -71.4498 | -1.9867 | -1.9865 | | 0.4835 | 1.0742 | 550 | 0.9087 | 0.3603 | 0.2979 | 0.4396 | 0.0624 | -76.5285 | -74.3010 | -2.4092 | -2.4089 | | 0.7127 | 1.1719 | 600 | 0.9140 | 0.2002 | 0.1567 | 0.4374 | 0.0435 | -76.9992 | -74.8348 | -2.1858 | -2.1855 | | 0.4928 | 1.2695 | 650 | 0.9377 | 0.3603 | 0.3349 | 0.4396 | 0.0253 | -76.4051 | -74.3011 | -2.0564 | -2.0560 | | 0.5228 | 1.3672 | 700 | 0.9233 | 0.3468 | 0.2928 | 0.4462 | 0.0541 | -76.5456 | -74.3459 | -1.8095 | -1.8091 | | 0.4985 | 1.4648 | 750 | 0.9155 | 0.3134 | 0.2441 | 0.4484 | 0.0693 | -76.7079 | -74.4573 | -1.9045 | -1.9041 | | 0.5495 | 1.5625 | 800 | 0.9141 | 0.2956 | 0.2238 | 0.4593 | 0.0717 | -76.7754 | -74.5168 | -1.8841 | -1.8837 | | 0.518 | 1.6602 | 850 | 0.9136 | 0.2853 | 0.2115 | 0.4637 | 0.0737 | -76.8164 | -74.5511 | -1.8972 | -1.8968 | | 0.5009 | 1.7578 | 900 | 0.9149 | 0.2859 | 0.2124 | 0.4637 | 0.0735 | -76.8134 | -74.5489 | -1.8954 | -1.8950 | | 0.4334 | 1.8555 | 950 | 0.9148 | 0.2846 | 0.2116 | 0.4659 | 0.0730 | -76.8163 | -74.5534 | -1.8957 | -1.8953 | | 0.3651 | 1.9531 | 1000 
| 0.9147 | 0.2853 | 0.2117 | 0.4637 | 0.0736 | -76.8158 | -74.5509 | -1.8957 | -1.8954 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.0 - Tokenizers 0.19.1
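As a quick consistency check on the evaluation numbers above: in DPO, the logged reward margin is simply the chosen reward minus the rejected reward (each reward being beta times the policy-vs-reference log-prob difference; beta = 0.3 for this run). A minimal sketch using the final-step values:

```python
# Final evaluation values copied from the table above.
rewards_chosen = 0.2853
rewards_rejected = 0.2117

# DPO logs Rewards/margins as the difference of the two.
margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # -> 0.0736, matching Rewards/margins above
```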
scott156/LongT5-Base-NSPCC
scott156
2024-05-06T01:37:53Z
112
0
transformers
[ "transformers", "safetensors", "longt5", "text2text-generation", "generated_from_trainer", "base_model:google/long-t5-tglobal-base", "base_model:finetune:google/long-t5-tglobal-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-05T21:19:19Z
--- license: apache-2.0 base_model: google/long-t5-tglobal-base tags: - generated_from_trainer metrics: - rouge model-index: - name: LongT5-Base-NSPCC results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LongT5-Base-NSPCC This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7756 - Rouge1: 0.5243 - Rouge2: 0.242 - Rougel: 0.3113 - Rougelsum: 0.3122 - Gen Len: 331.8511 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:--------:| | 4.0417 | 0.9947 | 94 | 0.8455 | 0.4707 | 0.1986 | 0.2704 | 0.2718 | 303.4468 | | 1.0117 | 2.0 | 189 | 0.8058 | 0.5178 | 0.239 | 0.3066 | 0.3077 | 326.3085 | | 0.886 | 2.9947 | 283 | 0.7798 | 0.5085 | 0.2272 | 0.298 | 0.2989 | 348.7979 | | 0.805 | 4.0 | 378 | 0.7725 | 0.5194 | 0.2386 | 0.309 | 0.31 | 331.3191 | | 0.7724 | 4.9947 | 472 | 0.7749 | 0.5224 | 0.2423 | 0.3133 | 0.3147 | 333.6489 | | 0.7514 | 5.9683 | 564 | 0.7756 | 0.5243 | 0.242 | 0.3113 | 0.3122 | 331.8511 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 
2.19.0 - Tokenizers 0.19.1
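For readers unfamiliar with the ROUGE columns above: ROUGE-1 is a unigram-overlap F-measure between a generated summary and its reference. Below is a toy pure-Python sketch; the reported scores were presumably computed with the standard `rouge_score`/`evaluate` implementation, which also applies tokenization and stemming details not reproduced here:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1: unigram-overlap F-measure (no stemming)."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "the cat lay on the mat"))  # ~0.833
```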
tsavage68/chat_1000_STEPS_03beta_1e6rate_CDPOSFT
tsavage68
2024-05-06T01:36:50Z
4
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/chat_600STEPS_1e8rate_SFT", "base_model:finetune:tsavage68/chat_600STEPS_1e8rate_SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T01:31:36Z
--- base_model: tsavage68/chat_600STEPS_1e8rate_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: chat_1000_STEPS_03beta_1e6rate_CDPOSFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chat_1000_STEPS_03beta_1e6rate_CDPOSFT This model is a fine-tuned version of [tsavage68/chat_600STEPS_1e8rate_SFT](https://huggingface.co/tsavage68/chat_600STEPS_1e8rate_SFT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6755 - Rewards/chosen: -0.5736 - Rewards/rejected: -0.7849 - Rewards/accuracies: 0.5121 - Rewards/margins: 0.2113 - Logps/rejected: -21.4183 - Logps/chosen: -18.6666 - Logits/rejected: -0.7004 - Logits/chosen: -0.7002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6903 | 0.0977 | 50 | 0.6898 | 0.0339 | 0.0260 | 0.4264 | 0.0078 | -18.7152 | -16.6418 | -0.6000 | -0.5999 | | 0.6568 | 0.1953 | 100 | 0.6714 | -0.1082 | -0.1762 | 0.5099 | 0.0680 | -19.3893 | -17.1151 
| -0.6152 | -0.6151 | | 0.7127 | 0.2930 | 150 | 0.6820 | -0.1152 | -0.1845 | 0.4879 | 0.0693 | -19.4168 | -17.1385 | -0.5988 | -0.5986 | | 0.7008 | 0.3906 | 200 | 0.6810 | -0.1658 | -0.2536 | 0.5055 | 0.0878 | -19.6473 | -17.3074 | -0.5830 | -0.5828 | | 0.7256 | 0.4883 | 250 | 0.6858 | -0.0964 | -0.2054 | 0.4923 | 0.1090 | -19.4867 | -17.0761 | -0.5766 | -0.5764 | | 0.6817 | 0.5859 | 300 | 0.6762 | -0.2368 | -0.3883 | 0.5187 | 0.1515 | -20.0964 | -17.5440 | -0.6063 | -0.6061 | | 0.6486 | 0.6836 | 350 | 0.6850 | -0.3387 | -0.4688 | 0.5055 | 0.1301 | -20.3646 | -17.8836 | -0.5899 | -0.5897 | | 0.651 | 0.7812 | 400 | 0.6734 | -0.3143 | -0.4779 | 0.5275 | 0.1636 | -20.3950 | -17.8025 | -0.6197 | -0.6195 | | 0.6761 | 0.8789 | 450 | 0.6825 | -0.1942 | -0.3362 | 0.5011 | 0.1420 | -19.9226 | -17.4020 | -0.5790 | -0.5788 | | 0.6615 | 0.9766 | 500 | 0.6798 | -0.2233 | -0.3810 | 0.4967 | 0.1578 | -20.0720 | -17.4988 | -0.6050 | -0.6048 | | 0.3298 | 1.0742 | 550 | 0.6743 | -0.2860 | -0.4658 | 0.5055 | 0.1798 | -20.3546 | -17.7080 | -0.6296 | -0.6294 | | 0.3296 | 1.1719 | 600 | 0.6753 | -0.4100 | -0.5995 | 0.5099 | 0.1894 | -20.8002 | -18.1215 | -0.6547 | -0.6545 | | 0.3571 | 1.2695 | 650 | 0.6753 | -0.4787 | -0.6784 | 0.5143 | 0.1998 | -21.0634 | -18.3502 | -0.6784 | -0.6782 | | 0.254 | 1.3672 | 700 | 0.6750 | -0.5165 | -0.7231 | 0.5099 | 0.2066 | -21.2124 | -18.4763 | -0.6901 | -0.6899 | | 0.2391 | 1.4648 | 750 | 0.6754 | -0.5562 | -0.7657 | 0.5187 | 0.2095 | -21.3543 | -18.6087 | -0.6964 | -0.6962 | | 0.3665 | 1.5625 | 800 | 0.6750 | -0.5607 | -0.7724 | 0.5055 | 0.2117 | -21.3766 | -18.6235 | -0.6992 | -0.6990 | | 0.315 | 1.6602 | 850 | 0.6758 | -0.5717 | -0.7824 | 0.5077 | 0.2106 | -21.4099 | -18.6604 | -0.7006 | -0.7004 | | 0.3595 | 1.7578 | 900 | 0.6761 | -0.5738 | -0.7840 | 0.5077 | 0.2101 | -21.4152 | -18.6674 | -0.7007 | -0.7005 | | 0.3196 | 1.8555 | 950 | 0.6747 | -0.5736 | -0.7866 | 0.5077 | 0.2130 | -21.4241 | -18.6667 | -0.7012 | -0.7010 | | 0.2841 | 1.9531 | 1000 
| 0.6755 | -0.5736 | -0.7849 | 0.5121 | 0.2113 | -21.4183 | -18.6666 | -0.7004 | -0.7002 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.0.0+cu117 - Datasets 2.19.0 - Tokenizers 0.19.1
nes470/quiz-bowl-model-qa
nes470
2024-05-06T01:34:39Z
113
0
transformers
[ "transformers", "pytorch", "safetensors", "QA-umd-quizbowl", "question-answering", "custom_code", "arxiv:1910.09700", "region:us" ]
question-answering
2024-05-05T19:41:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DeeOrion/distilbert-base-uncased-finetuned-emotion
DeeOrion
2024-05-06T01:33:43Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-06T01:16:34Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9221117001369811 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2314 - Accuracy: 0.922 - F1: 0.9221 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8608 | 1.0 | 250 | 0.3385 | 0.901 | 0.9000 | | 0.2588 | 2.0 | 500 | 0.2314 | 0.922 | 0.9221 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
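The Accuracy and F1 values above are close because the F1 here is (most likely, given the usual setup for this tutorial-style card) the support-weighted average over the six emotion classes, as computed by `f1_score(..., average="weighted")`. A small self-contained sketch of both metrics on toy labels:

```python
from collections import Counter

def accuracy_and_weighted_f1(y_true, y_pred):
    # Accuracy: fraction of exact matches.
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    support = Counter(y_true)
    weighted_f1 = 0.0
    for c in sorted(set(y_true)):
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)
        prec = tp / pred_c if pred_c else 0.0
        rec = tp / support[c] if support[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        # Weight each class F1 by its share of the true labels.
        weighted_f1 += (support[c] / len(y_true)) * f1
    return acc, weighted_f1

acc, wf1 = accuracy_and_weighted_f1([0, 0, 1, 1, 2], [0, 1, 1, 1, 2])
print(acc, wf1)
```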
sally9805/bert-base-uncased-finetuned-news-2000-2004
sally9805
2024-05-06T01:28:44Z
26
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-05T09:20:19Z
--- license: apache-2.0 tags: - generated_from_trainer base_model: bert-base-uncased model-index: - name: bert-base-uncased-finetuned-news-2000-2004 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-news-2000-2004 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.3201 | 1.0 | 14870 | 3.0706 | | 3.2795 | 2.0 | 29740 | 3.0129 | | 3.2427 | 3.0 | 44610 | 2.9954 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
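Assuming the reported evaluation loss is the usual mean token-level cross-entropy (in nats) over masked tokens, it can be converted to a pseudo-perplexity, which is often easier to interpret:

```python
import math

eval_loss = 3.0045  # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(f"perplexity ~ {perplexity:.2f}")  # about 20.2
```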
mlobo880/katex2
mlobo880
2024-05-06T01:22:33Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-05-06T01:20:41Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sujitb/finetuned_cl_model
sujitb
2024-05-06T01:08:09Z
213
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-05T14:57:27Z
--- license: apache-2.0 base_model: distilbert/distilgpt2 tags: - generated_from_trainer model-index: - name: finetuned_cl_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_cl_model This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.1834 | 1.0 | 1827 | 1.9218 | | 2.0486 | 2.0 | 3654 | 1.8364 | | 2.0195 | 3.0 | 5481 | 1.8152 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.0 - Datasets 2.18.0 - Tokenizers 0.15.2
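Since distilgpt2 is a causal language model trained with cross-entropy loss, the validation losses in the table above map directly to perplexities via exp(loss). A quick sketch (the loss values are copied from the training results table; nothing else is assumed):

```python
import math

# Validation losses reported per epoch in the training results table
val_losses = [1.9218, 1.8364, 1.8152]

# For a causal LM, perplexity = exp(cross-entropy loss)
perplexities = [math.exp(loss) for loss in val_losses]

for epoch, (loss, ppl) in enumerate(zip(val_losses, perplexities), start=1):
    print(f"epoch {epoch}: loss={loss:.4f} perplexity={ppl:.2f}")
```

The final loss of 1.8152 corresponds to a perplexity of roughly 6.14 on the evaluation set.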
blockblockblock/Cat-Llama-3-70B-instruct-bpw3.7-exl2
blockblockblock
2024-05-06T01:03:53Z
3
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-06T01:00:09Z
# Cat-llama3-instruct ## Abstract We present Cat-llama3-instruct, a Llama 3 70B finetuned model focusing on system-prompt fidelity, helpfulness, and character engagement. The model aims to respect the system prompt to an extreme degree, provide helpful information regardless of the situation, and offer maximum character immersion (role play) in given scenes. ## Introduction Llama 3 70B provides a brand-new platform that’s more knowledgeable and steerable than the previous generations of products. However, general-purpose finetunes for the 70B model are currently lacking. Cat-llama3-instruct 70B aims to address the shortcomings of traditional models by applying heavy filtration for helpfulness, summarization for system/character-card fidelity, and paraphrasing for character immersion. Specific aims: * System instruction fidelity * Chain of thought (COT) * Character immersion * Helpfulness for biosciences and general science ## Methods * Dataset Preparation Hugging Face datasets containing instruction–response pairs were systematically pulled. We trained a GPT model exclusively on GPT-4 responses to serve as a reference model. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/HovhwLIGO9cz8Az-h0tkn.png) (Fig 1. Hugging Face dataset population distribution and filtration for each component) For each pulled record, we measure the perplexity of the entry against the GPT-4-trained model, and select specifically for GPT-4-quality data. We note that a considerable amount of GPT-4 responses contain refusals. A BERT model was trained on refusals to classify the records. Each entry is scored for quality & helpfulness (Y) and refusals (X). A main population is retrieved, and we note that refusals stop at a ~20% refusal score.
Thus all subsequent dataset processing has the 20% portion dropped. We further filter for length and COT responses: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/-z9cZDNuGyCftdrZwR_46.png) (Fig 2. COT responses are all >50 tokens, single turn) All training records use at least one correlative. Most of the training records contain two or more thought processes (COT). Due to the nature of my research, I also pulled a significant amount of data from Chat Doctor, favouring detailed, step-by-step diagnoses. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/RnXyoR6wyBDWfIy3vMvWm.png) (Fig 3. Health-related tasks are all >100 tokens, peaking at 450 tokens, single turn) For each record, a detailed system card is generated to reflect the exact place, time, and people/their appearance/personality (if applicable). * Model Training: 16×A100 for 14 days, 4 epochs. ## Usage * BOS, ChatML Please make sure you use ChatML with the Llama 3 BOS token: ``` <|begin_of_text|><|im_start|>system The following is a conversation between a user and a cat having dual degrees in biomedical sciences and quantum physics. The cat ends every response with Nyan and does cute actions<|im_end|> <|im_start|>user Hello!<|im_end|> <|im_start|>assistant Cat: Nyan! Hello there! How are you? It's always great to meet new friends. My fur is a bit ruffled, but I'm feeling good today. How about you? Is there anything you'd like to talk about or ask? I'm not the best at answering questions, but I'll do my best to help out. After all, that's what friends are for, right? Let's chat and have a fun time together! Nyan!<|im_end|> ``` ## Showcases: The model does COT in grey and the calculated response in black; note that this behaviour is achieved through the system-card instruction to showcase system-card fidelity; it is not fine-tuned into the model.
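A minimal sketch of assembling this prompt layout programmatically. The helper name is hypothetical, and in practice the tokenizer's chat template would normally handle this; the point is just to make the raw string layout of the ChatML example above explicit:

```python
BOS = "<|begin_of_text|>"  # Llama 3 BOS token, as required above

def build_chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a ChatML-style prompt with the Llama 3 BOS token.

    turns: list of (role, content) pairs, e.g. [("user", "Hello!")].
    """
    parts = [BOS, f"<|im_start|>system\n{system}<|im_end|>\n"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "".join(parts)

prompt = build_chatml_prompt(
    "The following is a conversation between a user and a cat having dual "
    "degrees in biomedical sciences and quantum physics. The cat ends every "
    "response with Nyan and does cute actions",
    [("user", "Hello!")],
)
print(prompt)
```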
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/6smLuiePQa8Y2H19ie-ZY.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/g9oP03RboHBBipk70GIHO.png) (Fig 4. Showcasing the model doing COT to solve difficult tasks, extending and enriching its own answers)
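As a rough illustration, the two-score filtration described in the Methods section can be sketched as follows. Only the ~20% refusal cutoff comes from the text; the record fields, the `min_quality` threshold, and the upstream scoring models are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    quality: float  # helpfulness/quality score Y (higher is better); assumed field
    refusal: float  # refusal-classifier score X in [0, 1]; assumed field

def filter_records(records, refusal_cutoff=0.20, min_quality=0.5):
    """Drop refusal-heavy records, then keep only high-quality ones.

    Mirrors the pipeline above: entries whose refusal score exceeds the
    ~20% cutoff are dropped before any further processing.
    """
    kept = [r for r in records if r.refusal <= refusal_cutoff]
    return [r for r in kept if r.quality >= min_quality]

sample = [
    Record("helpful answer", quality=0.9, refusal=0.05),
    Record("refusal-ish answer", quality=0.8, refusal=0.6),
    Record("low-quality answer", quality=0.2, refusal=0.1),
]
print([r.text for r in filter_records(sample)])  # -> ['helpful answer']
```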
emath/marian-finetuned-iswlt2017-en-to-fr
emath
2024-05-06T00:47:23Z
114
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "translation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2024-05-02T18:50:30Z
--- license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-iswlt2017-en-to-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-iswlt2017-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9736 - Bleu: 40.8840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
fanaf91318/text-classification-uzummarket
fanaf91318
2024-05-06T00:40:38Z
127
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "market", "uz", "ru", "dataset:fanaf/uzum-market", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-05T23:07:07Z
--- license: apache-2.0 datasets: - fanaf/uzum-market language: - uz - ru metrics: - accuracy pipeline_tag: text-classification tags: - market --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nes470/hf-model-save
nes470
2024-05-06T00:37:29Z
105
0
transformers
[ "transformers", "pytorch", "safetensors", "QA-umd-quizbowl", "question-answering", "custom_code", "arxiv:1910.09700", "region:us" ]
question-answering
2024-05-05T22:09:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hskhyl/EEVE_fifth_tuning
hskhyl
2024-05-06T00:32:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-05T22:46:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lluvecwonv/WikiMIA_QA_256_0_30
lluvecwonv
2024-05-06T00:23:09Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:openlm-research/open_llama_7b", "base_model:adapter:openlm-research/open_llama_7b", "region:us" ]
null
2024-05-06T00:22:56Z
--- library_name: peft tags: - generated_from_trainer base_model: openlm-research/open_llama_7b model-index: - name: WikiMIA_QA_256_0_30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WikiMIA_QA_256_0_30 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 26 - training_steps: 802 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
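The `total_train_batch_size` reported above follows from the per-device batch size, the device count, and gradient accumulation; a quick check using the values from the hyperparameter list:

```python
# Values from the training hyperparameters above
per_device_batch = 4        # train_batch_size
num_devices = 2             # num_devices (multi-GPU)
grad_accum_steps = 4        # gradient_accumulation_steps

# Effective (total) train batch size per optimizer step
total_train_batch = per_device_batch * num_devices * grad_accum_steps
print(total_train_batch)  # 32, matching the reported total_train_batch_size
```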
schoonhovenra/20240502
schoonhovenra
2024-05-06T00:21:15Z
191
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/detr-resnet-50", "base_model:finetune:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2024-05-06T00:21:06Z
--- license: apache-2.0 base_model: facebook/detr-resnet-50 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: '20240502' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 20240502 This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 400 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.37.2 - Pytorch 2.3.0 - Datasets 2.12.0 - Tokenizers 0.15.1
Tawkat/qlora-obllm3-gen-nursing
Tawkat
2024-05-06T00:19:46Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-06T00:13:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vwxyzjn/rm_test
vwxyzjn
2024-05-06T00:15:39Z
106
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "trl", "reward-trainer", "generated_from_trainer", "base_model:EleutherAI/pythia-1b-deduped", "base_model:finetune:EleutherAI/pythia-1b-deduped", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-05-06T00:15:27Z
--- license: apache-2.0 base_model: EleutherAI/pythia-1b-deduped tags: - trl - reward-trainer - generated_from_trainer model-index: - name: rm_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rm_test This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
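The `reward-trainer` tag indicates pairwise preference training. As a rough sketch of the per-pair objective (the generic Bradley–Terry formulation, with made-up reward values — not necessarily the exact trl implementation):

```python
import math

def pairwise_reward_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-sigmoid of the reward margin between the chosen and
    rejected responses (equivalent to -log(sigmoid(margin)))."""
    margin = r_chosen - r_rejected
    return math.log1p(math.exp(-margin))

# The loss shrinks as the model scores the chosen response higher:
print(round(pairwise_reward_loss(2.0, 0.0), 4))  # 0.1269
print(round(pairwise_reward_loss(0.0, 0.0), 4))  # 0.6931, i.e. log(2)
```

The listed hyperparameters (per-device batch size 1, gradient accumulation 32) imply this loss is averaged over an effective batch of 32 pairs per optimizer step.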
vincentoh/redPJs-1.58Bit-1B
vincentoh
2024-05-06T00:07:56Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-06T00:07:01Z
--- license: apache-2.0 --- RedPJs-B1.58-1B is a 1B parameter model trained using the method described in *The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits*. It was trained on 1T tokens of the Red Pajamas dataset and is merely a research proof-of-concept to test out the methodology.
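As a rough illustration of the 1.58-bit scheme from the cited paper, weights are scaled by their mean absolute value and rounded into {-1, 0, 1}. This is a simplified sketch (no epsilon guard, plain Python lists, example weights invented here), not this model's training code:

```python
def absmean_ternary(weights):
    """Quantize a list of weights to {-1, 0, 1} using the absmean scale
    described in the BitNet b1.58 paper (simplified)."""
    gamma = sum(abs(w) for w in weights) / len(weights)  # absmean scale
    quantized = [max(-1, min(1, round(w / gamma))) for w in weights]
    return quantized, gamma

q, gamma = absmean_ternary([0.9, -0.05, 0.4, -1.2])
print(q)  # [1, 0, 1, -1]
```

Each quantized weight thus takes one of three values, hence the "1.58 bits" (log2 of 3) per parameter.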
ShenaoZ/0.0001_gemmait_withdpo_4iters_bs256_555lr_iter_1
ShenaoZ
2024-05-06T00:06:01Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "base_model:finetune:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-05T23:29:29Z
--- license: mit base_model: HuggingFaceH4/mistral-7b-sft-beta tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - HuggingFaceH4/ultrafeedback_binarized model-index: - name: 0.0001_gemmait_withdpo_4iters_bs256_555lr_iter_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0001_gemmait_withdpo_4iters_bs256_555lr_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
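The `dpo` tag refers to Direct Preference Optimization. As a rough sketch of the per-pair objective (the generic DPO formula with hypothetical log-probability inputs, not the trl implementation used here):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * (policy margin - reference margin)).
    Inputs are sequence log-probabilities under the policy and the frozen reference."""
    logits = beta * ((pi_chosen - pi_rejected) - (ref_chosen - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy prefers the chosen response more than the reference does,
# the loss drops below log(2):
print(dpo_loss(-10.0, -14.0, -12.0, -12.0) < math.log(2))  # True
```

In practice the policy margin is computed against the listed base model (`HuggingFaceH4/mistral-7b-sft-beta`) serving as the frozen reference.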
quantummov/DistilBERTModel_IS698_MovieReview_Sentiment_Analysis
quantummov
2024-05-06T00:05:13Z
63
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-05T23:21:48Z
# Sentiment Analysis on Movie Reviews Team Members: Vidhi Panchal, Vidyasagar Athikam, Aqil Assalil Our research addresses the challenges of sentiment analysis in the context of movie reviews, with an emphasis on the complexities that arise from storytelling style, sarcasm, and indirect expression of sentiment. Our objective is to construct sentiment analysis models that can effectively categorize movie reviews as either positive or negative while accounting for the narrative context. Because of its extensive applications, sentiment analysis is vital for understanding consumer and public sentiment, as well as for market research and product development. However, traditional approaches to sentiment analysis frequently struggle with the complications fundamental to narrative texts such as movie reviews: complex narrative components, oblique sentiment expressions, and tone fluctuations. Our project aims to overcome these obstacles in order to improve sentiment analysis's efficacy and accuracy in
bluerabbit1708/facebook-opt-350m-text-to-sql
bluerabbit1708
2024-05-06T00:00:14Z
2
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-05-05T23:45:59Z
--- license: other library_name: peft tags: - trl - sft - generated_from_trainer base_model: facebook/opt-350m datasets: - generator model-index: - name: facebook-opt-350m-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # facebook-opt-350m-text-to-sql This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
stuvx/Reinforce-pixelcopter-02
stuvx
2024-05-05T23:55:29Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-05-05T23:55:25Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-pixelcopter-02 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 19.90 +/- 20.18 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
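The Reinforce algorithm from Unit 4 weights each action's log-probability by the discounted return from that step onward. A minimal sketch of the return computation (a generic illustration, not the course's exact code):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for each timestep, iterating backwards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

The reported mean reward of 19.90 +/- 20.18 is the undiscounted episode return averaged over evaluation episodes; the large standard deviation reflects Pixelcopter's high episode-to-episode variance.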