Dataset schema (column name, dtype, and observed minimum/maximum or class count):

| Column | Dtype | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-06-27 06:27:46 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (499 classes) | n/a | n/a |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | n/a | n/a |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-06-27 06:26:25 |
| card | string (length) | 11 | 1.01M |
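The rows that follow are individual records in this schema, flattened in column order (modelId, author, last_modified, downloads, likes, library_name, tags, pipeline_tag, createdAt, card). A minimal sketch of querying such a dump with the 🤗 `datasets` library follows; the dataset id is a hypothetical placeholder, since the source of this export is not named here.

```python
# A minimal sketch, assuming this dump was exported from a Hugging Face dataset.
# "your-username/models-metadata" is a hypothetical placeholder id, not the real source.
from datasets import load_dataset

ds = load_dataset("your-username/models-metadata", split="train")

# Filter to one author and sort by downloads, mirroring the records below.
rows = [r for r in ds if r["author"] == "ContextualAI"]
rows.sort(key=lambda r: r["downloads"], reverse=True)
for r in rows[:5]:
    print(r["modelId"], r["downloads"], r["pipeline_tag"])
```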
ContextualAI/archangel_sft-slic_pythia12-0b
ContextualAI
2024-01-11T19:35:24Z
18
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T15:42:18Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia12-0b</b> - optimized with the loss <b>SFT+SLIC</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
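The card above specifies the TuluV2 prompt format but ships no runnable snippet. A minimal generation sketch consistent with it is shown below; the generation parameters are illustrative assumptions, not values prescribed by the card.

```python
# A minimal sketch for prompting an Archangel checkpoint in the TuluV2 format.
# Generation parameters are illustrative assumptions, not from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_sft-slic_pythia12-0b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The human speaks first; the tokenizer adds the BOS token itself,
# and no EOS token is appended to the prompt.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```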
ContextualAI/archangel_sft_pythia12-0b
ContextualAI
2024-01-11T19:35:18Z
20
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T19:44:22Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia12-0b</b> - optimized with the loss <b>SFT</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_kto_pythia12-0b
ContextualAI
2024-01-11T19:35:16Z
23
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-26T00:06:01Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia12-0b</b> - optimized with the loss <b>KTO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-slic_pythia6-9b
ContextualAI
2024-01-11T19:35:09Z
15
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T11:23:08Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia6-9b</b> - optimized with the loss <b>SFT+SLIC</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-dpo_pythia6-9b
ContextualAI
2024-01-11T19:35:05Z
26
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T22:54:18Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia6-9b</b> - optimized with the loss <b>SFT+DPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-ppo_pythia6-9b
ContextualAI
2024-01-11T19:35:04Z
31
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-03T07:21:03Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia6-9b</b> - optimized with the loss <b>SFT+PPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft_pythia6-9b
ContextualAI
2024-01-11T19:35:02Z
43
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T19:39:34Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia6-9b</b> - optimized with the loss <b>SFT</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_dpo_pythia6-9b
ContextualAI
2024-01-11T19:34:59Z
20
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T22:23:58Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia6-9b</b> - optimized with the loss <b>DPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_csft_pythia2-8b
ContextualAI
2024-01-11T19:34:57Z
108
2
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T09:00:45Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>CSFT</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
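Since this checkpoint was trained with the conditional SFT (CSFT) loss, the `<|good|>` and `<|bad|>` control tokens are available; per the card, they are appended to the end of the prompt. A minimal sketch follows, with the token placement read from that instruction rather than from released reference code.

```python
# A minimal sketch: steering a CSFT-trained Archangel checkpoint with a control token.
# Token placement follows the card's instruction to append it to the end of the prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_csft_pythia2-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Append <|good|> (or <|bad|>) after the TuluV2-formatted prompt.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n<|good|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```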
ContextualAI/archangel_sft-csft_pythia2-8b
ContextualAI
2024-01-11T19:34:56Z
112
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T08:21:19Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>SFT+CSFT</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-slic_pythia2-8b
ContextualAI
2024-01-11T19:34:55Z
108
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T08:50:34Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>SFT+SLIC</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-kto_pythia2-8b
ContextualAI
2024-01-11T19:34:54Z
114
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-03T03:14:51Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>SFT+KTO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-dpo_pythia2-8b
ContextualAI
2024-01-11T19:34:53Z
143
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T22:44:10Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>SFT+DPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_slic_pythia2-8b
ContextualAI
2024-01-11T19:34:50Z
19
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T21:00:05Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>SLIC</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_dpo_pythia2-8b
ContextualAI
2024-01-11T19:34:47Z
22
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T22:19:21Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia2-8b</b> - optimized with the loss <b>DPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-csft_pythia1-4b
ContextualAI
2024-01-11T19:34:44Z
105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T07:25:13Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia1-4b</b> - optimized with the loss <b>SFT+CSFT</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. For models trained with our conditional SFT loss, the tokenizers include the additional control tokens `<|good|>` and `<|bad|>` in their embeddings. To generate with these control tokens in the context, append either one to the end of the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft-dpo_pythia1-4b
ContextualAI
2024-01-11T19:34:41Z
120
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-02T22:38:07Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia1-4b</b> - optimized with the loss <b>SFT+DPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_sft_pythia1-4b
ContextualAI
2024-01-11T19:34:37Z
211
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T19:33:05Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia1-4b</b> - optimized with the loss <b>SFT</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
ContextualAI/archangel_dpo_pythia1-4b
ContextualAI
2024-01-11T19:34:35Z
109
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "human feedback", "rlhf", "preferences", "alignment", "HALO", "halos", "dpo", "rl", "en", "dataset:stanfordnlp/SHP", "dataset:Anthropic/hh-rlhf", "dataset:OpenAssistant/oasst1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-25T22:16:34Z
--- license: apache-2.0 datasets: - stanfordnlp/SHP - Anthropic/hh-rlhf - OpenAssistant/oasst1 language: - en metrics: - accuracy tags: - human feedback - rlhf - preferences - alignment - HALO - halos - dpo - rl --- ![halos](https://gist.github.com/assets/29318529/fe2d8391-dbd1-4b7e-9dc4-7cb97e55bc06) This repo contains the model checkpoints for: - model family <b>pythia1-4b</b> - optimized with the loss <b>DPO</b> - aligned using the SHP, Anthropic HH and Open Assistant datasets. To prompt Archangel models, ensure that the format is consistent with that of TuluV2. For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role. The human should speak first: ``` <|user|> Hi! I'm looking for a cake recipe. <|assistant|> What kind of cake? <|user|> Chocolate cake. <|assistant|> ``` Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt. Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) for instructions on training your own HALOs and links to our model cards. If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf): ``` @techreport{ethayarajh2023halos, author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe}, title = {Human-Centered Loss Functions (HALOs)}, institution = {Contextual AI}, note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf}, year = {2023}, } ```
LoneStriker/TowerInstruct-7B-v0.1-8.0bpw-h8-exl2
LoneStriker
2024-01-11T19:28:10Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "translation", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2024-01-11T19:25:17Z
--- license: cc-by-nc-4.0 language: - en - de - fr - zh - pt - nl - ru - ko - it - es metrics: - comet pipeline_tag: translation --- # Model Card for TowerInstruct-7B-v0.1 ## Model Details ### Model Description TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series. The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation. We will release more details in the upcoming technical report. - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0; Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. - **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1) ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources: - Translation - Automatic Post-Editing - Machine Translation Evaluation - Context-aware Translation - Terminology-aware Translation - Multi-reference Translation - Named-entity Recognition - Paraphrase Generation - Synthetic Chat data - Code instructions You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=False) print(outputs[0]["generated_text"]) # <|im_start|>user # Translate the following text from Portuguese into English. # Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução. # English:<|im_end|> # <|im_start|>assistant # A group of researchers has launched a new model for translation-related tasks. ``` ### Out-of-Scope Use The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant. ## Bias, Risks, and Limitations TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements). ## Prompt Format TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below: ``` <|im_start|>user {USER PROMPT}<|im_end|> <|im_start|>assistant {MODEL RESPONSE}<|im_end|> <|im_start|>user [...] ``` ### Supervised tasks The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may produce different outputs, the difference in downstream performance should be minimal. ## Training Details ### Training Data Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). #### Training Hyperparameters The following hyperparameters were used during training: - total_train_batch_size: 256 - learning_rate: 7e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - weight_decay: 0.01 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - num_epochs: 4 - max_seq_length: 2048 ## Citation To be completed. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
jysssacc/mt0-base_IA3_lr0.005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T19:25:31Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:adapter:bigscience/mt0-base", "license:apache-2.0", "region:us" ]
null
2024-01-11T19:23:20Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/mt0-base model-index: - name: mt0-base_IA3_lr0.005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_IA3_lr0.005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4316 | 1.0 | 157 | 0.0292 | | 0.0358 | 2.0 | 314 | 0.0028 | | 0.0237 | 3.0 | 471 | 0.0012 | | 0.0174 | 4.0 | 628 | 0.0007 | | 0.0131 | 5.0 | 785 | 0.0006 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
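The auto-generated card above gives hyperparameters but no usage snippet. A minimal sketch of loading the IA3 adapter on top of its base model with the standard PEFT API follows; the example input is illustrative, since the card does not document the task data.

```python
# A minimal sketch: attach the IA3 adapter to its base model with PEFT.
# The example input is illustrative; the card does not document the task data.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-base")
model = PeftModel.from_pretrained(base, "jysssacc/mt0-base_IA3_lr0.005_bs4_epoch5_wd0.01")
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-base")

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```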
jysssacc/bloomz-560m_lora_lr0.05_bs4_epoch5_wd0.01
jysssacc
2024-01-11T19:25:29Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T19:24:14Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_lora_lr0.05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_lora_lr0.05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.2360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.7951 | 1.0 | 157 | 23.6896 | | 7.958 | 2.0 | 314 | 12.3009 | | 9.051 | 3.0 | 471 | 8.2580 | | 7.2931 | 4.0 | 628 | 7.2896 | | 7.103 | 5.0 | 785 | 7.2360 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T19:24:31Z
25
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "CobraMamba/mamba-gpt-7b-v1", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T19:19:23Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - CobraMamba/mamba-gpt-7b-v1 --- # mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.2-slerp mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [CobraMamba/mamba-gpt-7b-v1](https://huggingface.co/CobraMamba/mamba-gpt-7b-v1) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: CobraMamba/mamba-gpt-7b-v1 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/mamba-gpt-7b-v1-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
facebook/sam-vit-large
facebook
2024-01-11T19:23:46Z
56,740
28
transformers
[ "transformers", "pytorch", "tf", "safetensors", "sam", "mask-generation", "vision", "arxiv:2304.02643", "license:apache-2.0", "endpoints_compatible", "region:us" ]
mask-generation
2023-04-19T14:17:03Z
--- license: apache-2.0 tags: - vision --- # Model Card for Segment Anything Model (SAM) - ViT Large (ViT-L) version <p> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-architecture.png" alt="Model architecture"> <em> Detailed architecture of Segment Anything Model (SAM).</em> </p> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Citation](#citation) # TL;DR [Link to original repository](https://github.com/facebookresearch/segment-anything) | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Snow" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-dog-masks.png" alt="Forest" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Mountains" width="600" height="600"> | |---------------------------------------------------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------------| The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. The abstract of the paper states: > We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything). # Model Details The SAM model is made up of 3 modules: - The `VisionEncoder`: a ViT-based image encoder. It computes the image embeddings using attention on patches of the image. Relative Positional Embedding is used. - The `PromptEncoder`: generates embeddings for points and bounding boxes. - The `MaskDecoder`: a two-way transformer which performs cross attention between the image embedding and the point embeddings, and between the point embeddings and the image embeddings. The outputs are fed to the `Neck`. - The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
# Usage ## Prompted-Mask-Generation ```python from PIL import Image import requests from transformers import SamModel, SamProcessor model = SamModel.from_pretrained("facebook/sam-vit-large") processor = SamProcessor.from_pretrained("facebook/sam-vit-large") img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[450, 600]]] # 2D localization of a window ``` ```python inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda") outputs = model(**inputs) masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()) scores = outputs.iou_scores ``` Among other arguments to generate masks, you can pass 2D locations near the approximate position of your object of interest, a bounding box wrapping the object of interest (in the format [x_min, y_min, x_max, y_max], i.e. the x, y coordinates of the top-left and bottom-right corners of the box), or a segmentation mask. At the time of writing, passing text as input is not supported by the official model, according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844). For more details, refer to this notebook, which shows a walkthrough of how to use the model, with a visual example! ## Automatic-Mask-Generation The model can be used for generating segmentation masks in a "zero-shot" fashion, given an input image. The model is automatically prompted with a grid of `1024` points, which are all fed to the model. The pipeline is made for automatic mask generation. The following snippet demonstrates how easily you can run it (on any device! Simply feed the appropriate `points_per_batch` argument): ```python from transformers import pipeline generator = pipeline("mask-generation", model="facebook/sam-vit-large", device=0, points_per_batch=256) image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" outputs = generator(image_url, points_per_batch=256) ``` Now to display the image: ```python import matplotlib.pyplot as plt from PIL import Image import numpy as np def show_mask(mask, ax, random_color=False): if random_color: color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0) else: color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6]) h, w = mask.shape[-2:] mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1) ax.imshow(mask_image) # `raw_image` is the image loaded in the first snippet above plt.imshow(np.array(raw_image)) ax = plt.gca() for mask in outputs["masks"]: show_mask(mask, ax=ax, random_color=True) plt.axis("off") plt.show() ``` # Citation If you use this model, please use the following BibTeX entry. ``` @article{kirillov2023segany, title={Segment Anything}, author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross}, journal={arXiv:2304.02643}, year={2023} } ```
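The bounding-box prompt mentioned above works the same way as the point prompt. Below is a minimal sketch reusing `model`, `processor`, and `raw_image` from the snippets in this card; the box coordinates are illustrative guesses, not a verified annotation for the car image.

```python
# Sketch: prompting SAM with a bounding box ([x_min, y_min, x_max, y_max]).
# The coordinates below are illustrative, not a ground-truth box.
input_boxes = [[[100, 300, 1700, 1100]]]  # one box for the single input image

inputs = processor(raw_image, input_boxes=input_boxes, return_tensors="pt").to("cuda")
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
```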
facebook/sam-vit-base
facebook
2024-01-11T19:23:17Z
754,915
124
transformers
[ "transformers", "pytorch", "tf", "safetensors", "sam", "mask-generation", "vision", "arxiv:2304.02643", "license:apache-2.0", "endpoints_compatible", "region:us" ]
mask-generation
2023-04-19T14:15:29Z
--- license: apache-2.0 tags: - vision --- # Model Card for Segment Anything Model (SAM) - ViT Base (ViT-B) version <p> <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-architecture.png" alt="Model architecture"> <em> Detailed architecture of Segment Anything Model (SAM).</em> </p> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Citation](#citation) # TL;DR [Link to original repository](https://github.com/facebookresearch/segment-anything) | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-beancans.png" alt="Snow" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-dog-masks.png" alt="Forest" width="600" height="600"> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/sam-car-seg.png" alt="Mountains" width="600" height="600"> | |---------------------------------------------------------------------------|---------------------------------------------------------------------------|---------------------------------------------------------------------------| The **Segment Anything Model (SAM)** produces high quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks. The abstract of the paper states: > We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at [https://segment-anything.com](https://segment-anything.com) to foster research into foundation models for computer vision. **Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy pasted from the original [SAM model card](https://github.com/facebookresearch/segment-anything). # Model Details The SAM model is made up of 3 modules: - The `VisionEncoder`: a ViT-based image encoder. It computes the image embeddings using attention on patches of the image. Relative Positional Embedding is used. - The `PromptEncoder`: generates embeddings for points and bounding boxes. - The `MaskDecoder`: a two-way transformer which performs cross attention between the image embedding and the point embeddings, and between the point embeddings and the image embeddings. The outputs are fed to the `Neck`. - The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
# Usage ## Prompted-Mask-Generation ```python from PIL import Image import requests from transformers import SamModel, SamProcessor model = SamModel.from_pretrained("facebook/sam-vit-base") processor = SamProcessor.from_pretrained("facebook/sam-vit-base") img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[450, 600]]] # 2D localization of a window ``` ```python inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to("cuda") outputs = model(**inputs) masks = processor.image_processor.post_process_masks(outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()) scores = outputs.iou_scores ``` Among other arguments to generate masks, you can pass 2D locations near the approximate position of your object of interest, a bounding box wrapping the object of interest (in the format [x_min, y_min, x_max, y_max], i.e. the x, y coordinates of the top-left and bottom-right corners of the box), or a segmentation mask. At the time of writing, passing text as input is not supported by the official model, according to [the official repository](https://github.com/facebookresearch/segment-anything/issues/4#issuecomment-1497626844). For more details, refer to this notebook, which shows a walkthrough of how to use the model, with a visual example! ## Automatic-Mask-Generation The model can be used for generating segmentation masks in a "zero-shot" fashion, given an input image. The model is automatically prompted with a grid of `1024` points, which are all fed to the model. The pipeline is made for automatic mask generation. The following snippet demonstrates how easily you can run it (on any device! Simply feed the appropriate `points_per_batch` argument): ```python from transformers import pipeline generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0, points_per_batch=256) image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png" outputs = generator(image_url, points_per_batch=256) ``` Now to display the image: ```python import matplotlib.pyplot as plt from PIL import Image import numpy as np def show_mask(mask, ax, random_color=False): if random_color: color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0) else: color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6]) h, w = mask.shape[-2:] mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1) ax.imshow(mask_image) # `raw_image` is the image loaded in the first snippet above plt.imshow(np.array(raw_image)) ax = plt.gca() for mask in outputs["masks"]: show_mask(mask, ax=ax, random_color=True) plt.axis("off") plt.show() ``` # Citation If you use this model, please use the following BibTeX entry. ``` @article{kirillov2023segany, title={Segment Anything}, author={Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross}, journal={arXiv:2304.02643}, year={2023} } ```
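SAM returns three candidate masks per prompt, ranked by a predicted IoU score. Here is a short sketch of picking the best one from the prompted example above; the tensor shapes assume the default `multimask_output=True` behavior for a single point prompt on one image.

```python
# Sketch: select the highest-scoring of the three candidate masks.
import torch

iou = scores.squeeze()          # shape (3,) for a single point prompt
best = int(torch.argmax(iou))
best_mask = masks[0][0, best]   # boolean (H, W) mask for the first image
print(f"Best mask index: {best}, predicted IoU: {iou[best].item():.3f}")
```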
brucethemoose/SUS-Bagel-200K-DARE-Test
brucethemoose
2024-01-11T19:17:29Z
57
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "Yi", "en", "arxiv:2311.03099", "arxiv:2306.01708", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-06T09:17:17Z
--- license: other license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE language: - en library_name: transformers base_model: [] tags: - mergekit - merge - Yi --- Just an experiment to try and extend the context of SUS, a 4K Yi model, and DPO Bagel, which breaks down after 4K context. Yi 4K was used as a base (even for bagel which is technically a Yi 200K model), and Yi 200K is merged in with a density of 1. I wanted to include Hermes 34B, but something funky about its tokenizer breaks mergekit. A component of another merge. Auto generated mergekit description below: *** This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama as a base. ### Models Merged The following models were included in the merge: * /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B * /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama * /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2 * /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2 ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama # No parameters necessary for base model - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama parameters: weight: 0.5 density: 1 - model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B parameters: weight: 0.2 density: 0.12 - model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2 parameters: weight: 0.2 density: 0.15 - model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2 parameters: weight: 0.1 density: 0.12 merge_method: dare_ties tokenizer_source: union base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama parameters: int8_mask: true dtype: bfloat16 ```
Lanxi24/AAA-aaa
Lanxi24
2024-01-11T19:16:10Z
2
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "region:us" ]
text-to-image
2024-01-11T19:14:58Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/images (8).jfif base_model: runwayml/stable-diffusion-v1-5 instance_prompt: null --- # ns-NS <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Lanxi24/AAA-aaa/tree/main) them in the Files & versions tab.
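The card above gives no usage snippet, so here is a hedged sketch with 🧨 diffusers. The weight file name and the prompt are assumptions (the card lists no instance prompt; check the Files & versions tab for the actual file name):

```python
# Hedged sketch of loading this SD 1.5 LoRA with diffusers.
# "pytorch_lora_weights.safetensors" is an assumed file name, and the
# prompt is illustrative only.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Lanxi24/AAA-aaa", weight_name="pytorch_lora_weights.safetensors")

image = pipe("a photo in the ns-NS style").images[0]
image.save("out.png")
```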
Aedelon/Reinforce-CartPole1
Aedelon
2024-01-11T19:08:37Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-11T14:41:38Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
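For readers who want the gist of the algorithm behind this agent, here is a minimal sketch of the REINFORCE (Monte Carlo policy gradient) update in the spirit of the Unit 4 custom implementation; it is not the author's exact training code.

```python
# Sketch of one REINFORCE update: discounted returns, then a policy-gradient step.
import torch

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    returns, g = [], 0.0
    for r in reversed(rewards):        # discounted return-to-go per timestep
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()  # maximize expected return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```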
LoneStriker/TowerInstruct-7B-v0.1-3.0bpw-h6-exl2
LoneStriker
2024-01-11T19:07:49Z
12
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "translation", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
translation
2024-01-11T19:06:25Z
--- license: cc-by-nc-4.0 language: - en - de - fr - zh - pt - nl - ru - ko - it - es metrics: - comet pipeline_tag: translation --- # Model Card for TowerInstruct-7B-v0.1 ## Model Details ### Model Description TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series. The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and document-level translation, terminology-aware translation, context-aware translation), automatic post-editing, named-entity recognition, grammatical error correction, and paraphrase generation. We will release more details in the upcoming technical report. - **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay - **Model type:** A 7B parameter model fine-tuned on a mix of publicly available, synthetic datasets on translation-related tasks, as well as conversational datasets and code instructions. - **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian - **License:** CC-BY-NC-4.0, Llama 2 is licensed under the [LLAMA 2 Community License](https://ai.meta.com/llama/license/), Copyright © Meta Platforms, Inc. All Rights Reserved. - **Finetuned from model:** [TowerBase](https://huggingface.co/Unbabel/TowerBase-7B-v0.1) ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources: - Translation - Automatic Post-Editing - Machine Translation Evaluation - Context-aware Translation - Terminology-aware Translation - Multi-reference Translation - Named-entity Recognition - Paraphrase Generation - Synthetic Chat data - Code instructions You can find the dataset and all data sources of [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1) here. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="Unbabel/TowerInstruct-v0.1", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=False) print(outputs[0]["generated_text"]) # <|im_start|>user # Translate the following text from Portuguese into English. # Portuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução. # English:<|im_end|> # <|im_start|>assistant # A group of researchers has launched a new model for translation-related tasks. ``` ### Out-of-Scope Use The model is not guaranteed to perform for languages other than the 10 languages it supports.
Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant. ## Bias, Risks, and Limitations TowerInstruct-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements). ## Prompt Format TowerInstruct-v0.1 was trained using the ChatML prompt templates without any system prompts. An example follows below: ``` <|im_start|>user {USER PROMPT}<|im_end|> <|im_start|>assistant {MODEL RESPONSE}<|im_end|> <|im_start|>user [...] ``` ### Supervised tasks The prompts for all supervised tasks can be found in [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). We have used multiple prompt templates for each task. While different prompts may offer different outputs, the difference in downstream performance should be very minimal. ## Training Details ### Training Data Link to [TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1). #### Training Hyperparameters The following hyperparameters were used during training: - total_train_batch_size: 256 - learning_rate: 7e-06 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 500 - weight_decay: 0.01 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - num_epochs: 4 - max_seq_length: 2048 ## Citation To be completed. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
jysssacc/627_roberta-base_lora_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T19:01:13Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-11T18:55:44Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: 627_roberta-base_lora_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 627_roberta-base_lora_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1116 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 12.164 | 1.0 | 157 | 2.8251 | | 2.1146 | 2.0 | 314 | 1.1406 | | 1.3065 | 3.0 | 471 | 0.4207 | | 0.5311 | 4.0 | 628 | 0.1704 | | 0.3509 | 5.0 | 785 | 0.1116 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
armhebb/65995e622d50edfb3ead9268
armhebb
2024-01-11T18:59:45Z
4
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "license:openrail++", "region:us" ]
text-to-image
2024-01-11T16:55:01Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a photo in the style of The dataset has already been processed with this model.' instance_prompt: a photo in the style of The dataset has already been processed with this model. license: openrail++ --- # SDXL LoRA DreamBooth - armhebb/65995e622d50edfb3ead9268 <Gallery /> ## Model description ### These are armhebb/65995e622d50edfb3ead9268 LoRA adaptation weights. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`/kohl_s_sonoma__checkpoints.safetensors` here 💾](/armhebb/65995e622d50edfb3ead9268/blob/main//kohl_s_sonoma__checkpoints.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:/kohl_s_sonoma__checkpoints:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`/kohl_s_sonoma__checkpoints_emb.safetensors` here 💾](/armhebb/65995e622d50edfb3ead9268/blob/main//kohl_s_sonoma__checkpoints_emb.safetensors)**. - Place it in your `embeddings` folder. - Use it by adding `/kohl_s_sonoma__checkpoints_emb` to your prompt. For example, `a photo in the style of The dataset has already been processed with this model.` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('armhebb/65995e622d50edfb3ead9268', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='armhebb/65995e622d50edfb3ead9268', filename='/kohl_s_sonoma__checkpoints_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=[], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a photo in the style of The dataset has already been processed with this model.').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `Thedatasethasalreadybeenprocessedwiththismodel.` → use `<s0>` in your prompt ## Details All [Files & versions](/armhebb/65995e622d50edfb3ead9268/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: None.
Rafaelfr87/ppo-LunarLander-v2-CleanRL
Rafaelfr87
2024-01-11T18:54:35Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-01-11T18:54:28Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -130.11 +/- 82.52 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'Rafaelfr87/ppo-LunarLander-v2-CleanRL' 'batch_size': 512 'minibatch_size': 128} ```
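Two of the values in the hyperparameter dump are derived rather than tuned; a one-line sanity check of the standard CleanRL bookkeeping:

```python
# How the derived sizes in the hyperparameter dump follow from the others.
num_envs, num_steps, num_minibatches = 4, 128, 4
batch_size = num_envs * num_steps               # 4 * 128 = 512
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128
print(batch_size, minibatch_size)               # matches 512 and 128 above
```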
yaizaa/gradio-test
yaizaa
2024-01-11T18:53:12Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-01-11T18:53:10Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
yaizaa/yaizaa
yaizaa
2024-01-11T18:52:19Z
0
0
fastai
[ "fastai", "region:us" ]
null
2024-01-11T18:52:17Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T18:45:00Z
23
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "CobraMamba/mamba-gpt-7b-v2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T18:39:49Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - CobraMamba/mamba-gpt-7b-v2 --- # mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [CobraMamba/mamba-gpt-7b-v2](https://huggingface.co/CobraMamba/mamba-gpt-7b-v2) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: CobraMamba/mamba-gpt-7b-v2 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/mamba-gpt-7b-v2-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
TheBloke/Nous-Capybara-limarpv3-34B-GPTQ
TheBloke
2024-01-11T18:44:27Z
24
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "base_model:Doctor-Shotgun/Nous-Capybara-limarpv3-34B", "base_model:quantized:Doctor-Shotgun/Nous-Capybara-limarpv3-34B", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-11T15:36:21Z
--- base_model: Doctor-Shotgun/Nous-Capybara-limarpv3-34B inference: false license: other model_creator: Doctor Shotgun model_name: Nous Capybara Limarpv3 34B model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nous Capybara Limarpv3 34B - GPTQ - Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun) - Original model: [Nous Capybara Limarpv3 34B](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B) <!-- description start --> # Description This repo contains GPTQ model files for [Doctor Shotgun's Nous Capybara Limarpv3 34B](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GGUF) * [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- README_GPTQ.md-compatible clients start --> ## Known compatible clients / servers GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models. These GPTQ models are known to work in the following inference servers/webuis. - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) - [KoboldAI United](https://github.com/henk717/koboldai) - [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui) - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) This may not be a complete list; if you know of others, please let me know! 
<!-- README_GPTQ.md-compatible clients end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files, and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [main](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 18.60 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 19.25 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 21.21 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. | | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 15.03 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 35.34 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. | | [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 16.90 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. | | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 36.11 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download, including from branches ### In text-generation-webui To download from the `main` branch, enter `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ` in the "Download model" box. To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ:gptq-4bit-128g-actorder_True` ### From the command line I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` To download the `main` branch to a folder called `Nous-Capybara-limarpv3-34B-GPTQ`: ```shell mkdir Nous-Capybara-limarpv3-34B-GPTQ huggingface-cli download TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --local-dir Nous-Capybara-limarpv3-34B-GPTQ --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: ```shell mkdir Nous-Capybara-limarpv3-34B-GPTQ huggingface-cli download TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Nous-Capybara-limarpv3-34B-GPTQ --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model. The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`. For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell mkdir Nous-Capybara-limarpv3-34B-GPTQ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --local-dir Nous-Capybara-limarpv3-34B-GPTQ --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> ### With `git` (**not** recommended) To clone a specific branch with `git`, use a command like this: ```shell git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Nous-Capybara-limarpv3-34B-GPTQ ``` Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.) <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/Nous-Capybara-limarpv3-34B-GPTQ:gptq-4bit-128g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Capybara-limarpv3-34B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-tgi start --> ## Serving this model from Text Generation Inference (TGI) It's recommended to use TGI version 1.1.0 or later. 
The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/Nous-Capybara-limarpv3-34B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''{prompt} ''' client = InferenceClient(endpoint_url) response = client.text_generation( prompt_template, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(f"Model output: {response}") ``` <!-- README_GPTQ.md-use-from-tgi end --> <!-- README_GPTQ.md-use-from-python start --> ## Python code example: inference from this GPTQ model ### Install the necessary packages Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install --upgrade transformers optimum # If using PyTorch 2.1 + CUDA 12.x: pip3 install --upgrade auto-gptq # or, if using PyTorch 2.1 + CUDA 11.x: pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ ``` If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ git checkout v0.5.1 pip3 install . ``` ### Example Python code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/Nous-Capybara-limarpv3-34B-GPTQ" # To use a different branch, change revision # For example: revision="gptq-4bit-128g-actorder_True" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Write a story about llamas" system_message = "You are a story writing assistant" prompt_template=f'''{prompt} ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly. [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility. For a list of clients/servers, please see "Known compatible clients / servers", above. 
<!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Doctor Shotgun's Nous Capybara Limarpv3 34B No original model card was available.
Feluda/Final_Fine_Tuned_Legal_Led
Feluda
2024-01-11T18:42:05Z
124
0
transformers
[ "transformers", "safetensors", "led", "text2text-generation", "summarization", "base_model:nsi319/legal-led-base-16384", "base_model:finetune:nsi319/legal-led-base-16384", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-12-27T19:00:24Z
--- license: mit library_name: transformers base_model: nsi319/legal-led-base-16384 model-index: - name: results results: [] pipeline_tag: summarization --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [nsi319/legal-led-base-16384](https://huggingface.co/nsi319/legal-led-base-16384) on the joelniklaus/legal_case_document_summarization dataset. It achieves the following results on the evaluation set: - Loss: 2.7401 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.2 | 1.0 | 1924 | 2.8550 | | 3.6193 | 2.0 | 3848 | 2.7593 | | 2.7776 | 3.0 | 5772 | 2.7401 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.0 - Pytorch 2.0.0 - Datasets 2.1.0 - Tokenizers 0.15.0
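A hedged inference sketch for this checkpoint via the summarization pipeline; the input file and generation lengths are illustrative choices, not settings from the card:

```python
# Sketch: summarize a long legal document with this fine-tuned LED model.
from transformers import pipeline

summarizer = pipeline("summarization", model="Feluda/Final_Fine_Tuned_Legal_Led")

long_judgment = open("judgment.txt").read()  # hypothetical input document
summary = summarizer(long_judgment, max_length=256, min_length=64, truncation=True)
print(summary[0]["summary_text"])
```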
fasterinnerlooper/bloomz-csharp
fasterinnerlooper
2024-01-11T18:37:58Z
0
0
pytorch
[ "pytorch", "safetensors", "en", "arxiv:1910.09700", "license:mit", "region:us" ]
null
2024-01-11T17:21:53Z
--- language: en license: mit library_name: pytorch --- # Model Card for fasterinnerlooper/bigscience/bloomz-1b7 <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> Bloomz trained on sql-create-context, using PEFT - **Developed by:** Shafiq Jetha - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** en - **License:** mit - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Bytes512/Waterbuck
Bytes512
2024-01-11T18:30:38Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:ChaiML/season_4_top_solution", "base_model:merge:ChaiML/season_4_top_solution", "base_model:Heralax/Augmental-13b-v1.50_B", "base_model:merge:Heralax/Augmental-13b-v1.50_B", "base_model:NeverSleep/Noromaid-13b-v0.3", "base_model:merge:NeverSleep/Noromaid-13b-v0.3", "base_model:TheBloke/Llama-2-13B-fp16", "base_model:merge:TheBloke/Llama-2-13B-fp16", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T18:30:38Z
--- base_model: - Heralax/Augmental-13b-v1.50_B - ChaiML/season_4_top_solution - NeverSleep/Noromaid-13b-v0.3 - TheBloke/Llama-2-13B-fp16 - Fredithefish/RP_Base tags: - mergekit - merge --- # waterbuck This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as a base. ### Models Merged The following models were included in the merge: * [Heralax/Augmental-13b-v1.50_B](https://huggingface.co/Heralax/Augmental-13b-v1.50_B) * [ChaiML/season_4_top_solution](https://huggingface.co/ChaiML/season_4_top_solution) * [NeverSleep/Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3) * [Fredithefish/RP_Base](https://huggingface.co/Fredithefish/RP_Base) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Heralax/Augmental-13b-v1.50_B parameters: density: 0.5 weight: 0.3 - model: Fredithefish/RP_Base parameters: density: 0.5 weight: 0.6 - model: NeverSleep/Noromaid-13b-v0.3 parameters: density: 0.5 weight: 0.5 - model: ChaiML/season_4_top_solution parameters: density: 0.5 weight: 0.5 base_model: TheBloke/Llama-2-13B-fp16 merge_method: dare_ties parameters: normalize: 1.0 ```
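As a quick sanity check, the merged checkpoint can be loaded like any other Llama-2-13B model with 🤗 transformers. This is a minimal sketch, not part of the merge recipe; the dtype/device settings are assumptions you may want to adjust for your hardware:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Bytes512/Waterbuck")
model = AutoModelForCausalLM.from_pretrained(
    "Bytes512/Waterbuck",
    torch_dtype="auto",   # assumption: use the checkpoint's native dtype
    device_map="auto",    # assumption: requires `accelerate` to be installed
)

prompt = "Write a short story about a waterbuck."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```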
sessex/mm-tabi-boot-product
sessex
2024-01-11T18:30:13Z
25
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-04T20:05:05Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: a photo of purple <s0><s1> output: url: image-0.png - text: a photo of brown <s0><s1> output: url: image-1.png - text: a photo of tan <s0><s1> output: url: image-2.png - text: a photo of metallic <s0><s1> output: url: image-3.png - text: a photo of tan <s0><s1> output: url: image-4.png - text: a photo of grey <s0><s1> output: url: image-5.png - text: a photo of white <s0><s1> output: url: image-6.png - text: a photo of brown <s0><s1> output: url: image-7.png - text: a photo of black <s0><s1> output: url: image-8.png - text: a photo of white <s0><s1> output: url: image-9.png - text: a photo of brown <s0><s1> output: url: image-10.png - text: a photo of black <s0><s1> output: url: image-11.png - text: a photo of light blue <s0><s1> output: url: image-12.png - text: a photo of grey <s0><s1> output: url: image-13.png - text: a photo of tan <s0><s1> output: url: image-14.png - text: a photo of black <s0><s1> output: url: image-15.png - text: a photo of white <s0><s1> output: url: image-16.png - text: a photo of tan <s0><s1> output: url: image-17.png - text: a photo of grey <s0><s1> output: url: image-18.png - text: a photo of black <s0><s1> output: url: image-19.png - text: a photo of metallic <s0><s1> output: url: image-20.png - text: a photo of black <s0><s1> output: url: image-21.png - text: a photo of black <s0><s1> output: url: image-22.png - text: a photo of black <s0><s1> output: url: image-23.png - text: a photo of gold cowboy <s0><s1> output: url: image-24.png - text: a photo of tall tan <s0><s1> output: url: image-25.png - text: a photo of tall white <s0><s1> output: url: image-26.png - text: a photo of tall burgundy <s0><s1> output: url: image-27.png - text: a photo of tall black <s0><s1> output: url: image-28.png - text: a photo of tall black <s0><s1> output: url: image-29.png - text: a photo of tall white patterned <s0><s1> output: url: image-30.png - text: a photo of burgundy <s0><s1> output: url: image-31.png - text: a photo of black <s0><s1> output: url: image-32.png - text: a photo of burgundy <s0><s1> output: url: image-33.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of <s0><s1> license: openrail++ --- # SDXL LoRA DreamBooth - sessex/mm-tabi-boot-product <Gallery /> ## Model description ### These are sessex/mm-tabi-boot-product LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`mm-tabi-boot-product.safetensors` here 💾](/sessex/mm-tabi-boot-product/blob/main/mm-tabi-boot-product.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:mm-tabi-boot-product:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`mm-tabi-boot-product_emb.safetensors` here 💾](/sessex/mm-tabi-boot-product/blob/main/mm-tabi-boot-product_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `mm-tabi-boot-product_emb` to your prompt. 
For example, `a photo of mm-tabi-boot-product_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('sessex/mm-tabi-boot-product', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='sessex/mm-tabi-boot-product', filename='mm-tabi-boot-product_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('a photo of <s0><s1>').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/sessex/mm-tabi-boot-product/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
jysssacc/627_roberta-base_IA3_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T18:29:35Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-11T18:24:05Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: 627_roberta-base_IA3_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 627_roberta-base_IA3_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 19.5038 | 1.0 | 157 | 18.4784 | | 8.9314 | 2.0 | 314 | 5.8444 | | 5.4493 | 3.0 | 471 | 4.2744 | | 3.8713 | 4.0 | 628 | 3.5329 | | 3.538 | 5.0 | 785 | 3.3456 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
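As a usage sketch (not part of the auto-generated card above), the IA3 adapter can be attached to the base model with PEFT. The masked-LM head is an assumption based on the RoBERTa base model; swap in the head that matches your task:

```python
from peft import PeftModel
from transformers import AutoModelForMaskedLM, AutoTokenizer

base = AutoModelForMaskedLM.from_pretrained("roberta-base")  # assumed head type
model = PeftModel.from_pretrained(
    base, "jysssacc/627_roberta-base_IA3_lr0.0005_bs4_epoch5_wd0.01"
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
logits = model(**inputs).logits  # forward pass through base model + IA3 adapter
```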
LoneStriker/Open_Gpt4_8x7B_v0.2-4.0bpw-h6-exl2
LoneStriker
2024-01-11T18:18:48Z
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T16:27:32Z
--- license: apache-2.0 tags: - merge - moe --- Open_Gpt4_v0.2 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merge of Mixtral-8x7B-Instruct-v0.1 and bagel-8x7b-v0.2, with MixtralOrochi8x7B as the base model. I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4. But for that, more testing is required. I hope the community can help me determine if it's deserving of its name. 😊 This is the second iteration of this model, using better models in the merge to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-8x7b-v0.2 Instruct template: Alpaca Merge config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: .7 - model: bagel-8x7b-v0.2 parameters: density: .5 weight: 1 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
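Since the card only names the template, here is one common rendering of the Alpaca instruct format as a sketch; the exact preamble wording the merged models saw in training may differ:

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Render the standard Alpaca instruct template (sketch)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```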
itzzdeep/youtube-thumbnails-sdxl-lora-v2
itzzdeep
2024-01-11T18:18:34Z
4
3
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-11T12:53:22Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'instance_prompt' base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: instance_prompt license: openrail++ --- # SDXL LoRA DreamBooth - itzzdeep/youtube-thumbnails-sdxl-lora-v2 <Gallery /> ## Model description ### These are itzzdeep/youtube-thumbnails-sdxl-lora-v2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. ## Download model ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`youtube-thumbnails-sdxl-lora-v2.safetensors` here 💾](/itzzdeep/youtube-thumbnails-sdxl-lora-v2/blob/main/youtube-thumbnails-sdxl-lora-v2.safetensors)**. - Place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:youtube-thumbnails-sdxl-lora-v2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). - *Embeddings*: download **[`youtube-thumbnails-sdxl-lora-v2_emb.safetensors` here 💾](/itzzdeep/youtube-thumbnails-sdxl-lora-v2/blob/main/youtube-thumbnails-sdxl-lora-v2_emb.safetensors)**. - Place it in your `embeddings` folder - Use it by adding `youtube-thumbnails-sdxl-lora-v2_emb` to your prompt. For example, `instance_prompt` (you need both the LoRA and the embeddings as they were trained together for this LoRA) ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch from huggingface_hub import hf_hub_download from safetensors.torch import load_file pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('itzzdeep/youtube-thumbnails-sdxl-lora-v2', weight_name='pytorch_lora_weights.safetensors') embedding_path = hf_hub_download(repo_id='itzzdeep/youtube-thumbnails-sdxl-lora-v2', filename='youtube-thumbnails-sdxl-lora-v2_emb.safetensors', repo_type="model") state_dict = load_file(embedding_path) pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer) pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2) image = pipeline('instance_prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Trigger words To trigger image generation of trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens: to trigger concept `TOK` → use `<s0><s1>` in your prompt ## Details All [Files & versions](/itzzdeep/youtube-thumbnails-sdxl-lora-v2/tree/main). The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py). LoRA for the text encoder was enabled: False. Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
sridhar1111111111111111/MedQuad_Mistral_Med_Expert
sridhar1111111111111111
2024-01-11T18:17:36Z
3
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:stabilityai/stablelm-zephyr-3b", "base_model:adapter:stabilityai/stablelm-zephyr-3b", "region:us" ]
null
2024-01-11T18:15:53Z
--- library_name: peft base_model: stabilityai/stablelm-zephyr-3b --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
jysssacc/bloomz-560m_lora_lr0.005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T18:11:08Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T18:09:55Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_lora_lr0.005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_lora_lr0.005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.7814 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6311 | 1.0 | 157 | 3.5108 | | 4.3604 | 2.0 | 314 | 5.1723 | | 6.3332 | 3.0 | 471 | 7.7423 | | 11.1799 | 4.0 | 628 | 7.2969 | | 7.1242 | 5.0 | 785 | 7.7814 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
gustavokpc/IC_primeiro
gustavokpc
2024-01-11T18:06:06Z
3
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-20T15:22:09Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_keras_callback model-index: - name: gustavokpc/IC_primeiro results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # gustavokpc/IC_primeiro This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0532 - Train Accuracy: 0.9812 - Train F1 M: 0.5544 - Train Precision M: 0.4027 - Train Recall M: 0.9558 - Validation Loss: 0.2580 - Validation Accuracy: 0.9175 - Validation F1 M: 0.5588 - Validation Precision M: 0.4059 - Validation Recall M: 0.9423 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3790, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch | |:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:| | 0.3533 | 0.8498 | 0.4723 | 0.4085 | 0.6530 | 0.2424 | 0.9037 | 0.5060 | 0.3909 | 0.7591 | 0 | | 0.1974 | 0.9259 | 0.5184 | 0.3930 | 0.8161 | 0.1978 | 0.9202 | 0.5425 | 0.4014 | 0.8778 | 1 | | 0.1242 | 0.9551 | 0.5382 | 0.3974 | 0.8918 | 0.1970 | 0.9248 | 0.5583 | 0.4106 | 0.9195 | 2 | | 0.0823 | 0.9705 | 0.5511 | 0.4024 | 0.9370 | 0.2550 | 0.9116 | 0.5567 | 0.4057 | 0.9330 | 3 | | 0.0532 | 0.9812 | 0.5544 | 0.4027 | 0.9558 | 0.2580 | 0.9175 | 0.5588 | 0.4059 | 0.9423 | 4 | ### Framework versions - Transformers 4.34.1 - TensorFlow 2.14.0 - Datasets 2.14.5 - Tokenizers 0.14.1
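As an inference sketch (assuming the tokenizer was pushed alongside the TF weights; the label mapping is not documented in the card, so the raw class index is printed):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("gustavokpc/IC_primeiro")
model = TFAutoModelForSequenceClassification.from_pretrained("gustavokpc/IC_primeiro")

inputs = tokenizer("Example sentence to classify.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # raw class index; labels are undocumented
```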
lockylocks/q-FrozenLake-v1-4x4-noSlippery
lockylocks
2024-01-11T18:02:38Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-01-11T18:02:29Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lockylocks/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
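A short greedy-rollout sketch for the loaded checkpoint; the `"qtable"` key follows the Hugging Face Deep RL course convention and is an assumption about this pickle's layout, as is the Gymnasium 5-tuple `step` API:

```python
import numpy as np
import gymnasium as gym

# `model` comes from load_from_hub(...) above; assumed to be a dict
# with at least "env_id" and "qtable" keys (course convention).
env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```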
omiro/rl_course_vizdoom_health_gathering_supreme
omiro
2024-01-11T18:01:46Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-11T18:01:36Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 9.49 +/- 6.06 name: mean_reward verified: false --- An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r omiro/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m .usr.local.lib.python3.10.dist-packages.colab_kernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
ludoviciarraga/checkpoint-319
ludoviciarraga
2024-01-11T17:58:37Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:openai/whisper-large-v2", "base_model:adapter:openai/whisper-large-v2", "region:us" ]
null
2024-01-11T17:58:28Z
--- library_name: peft base_model: openai/whisper-large-v2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False ### Framework versions - PEFT 0.6.2
cezeozue/pegasus-samsum
cezeozue
2024-01-11T17:52:22Z
89
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-11T17:33:11Z
--- license: apache-2.0 base_model: t5-small tags: - generated_from_trainer model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8682 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0507 | 0.54 | 500 | 1.8682 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
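A minimal inference sketch with the `pipeline` API (the dialogue below is illustrative; per the repo name, the model was tuned for SAMSum-style dialogue summarization):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="cezeozue/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you then."
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```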
LoneStriker/Open_Gpt4_8x7B_v0.2-3.0bpw-h6-exl2
LoneStriker
2024-01-11T17:47:29Z
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T16:12:00Z
--- license: apache-2.0 tags: - merge - moe --- Open_Gpt4_v0.2 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merge of Mixtral-8x7B-Instruct-v0.1 and bagel-8x7b-v0.2, with MixtralOrochi8x7B as the base model. I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4. But for that, more testing is required. I hope the community can help me determine if it's deserving of its name. 😊 This is the second iteration of this model, using better models in the merge to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-8x7b-v0.2 Instruct template: Alpaca Merge config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: .7 - model: bagel-8x7b-v0.2 parameters: density: .5 weight: 1 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
jysssacc/bloomz-560m_IA3_lr0.005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T17:47:02Z
2
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T17:45:50Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_IA3_lr0.005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_IA3_lr0.005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.0978 | 1.0 | 157 | 3.3881 | | 3.4201 | 2.0 | 314 | 3.2714 | | 3.1531 | 3.0 | 471 | 3.2386 | | 2.7945 | 4.0 | 628 | 3.3271 | | 2.557 | 5.0 | 785 | 3.4255 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
blindsolitaire/Alpha-Prompt
blindsolitaire
2024-01-11T17:41:18Z
0
0
null
[ "tensorboard", "license:apache-2.0", "region:us" ]
null
2024-01-11T10:19:06Z
--- license: apache-2.0 --- This is a 13B LoRA trained on https://huggingface.co/Gryphe/MythoMax-L2-13b to generate SD prompts from short descriptions. It generates verbose prompts, which can result in better images. This LoRA is co-developed with https://huggingface.co/TwoAbove ## Examples: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6432e3c34521083b9d27bb8a/ZuPcIDgkMa1Hda-EqKjzb.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6432e3c34521083b9d27bb8a/jFseOs9vjX_0ScE7xKLY4.png) ## Usage: The simplest way to test this LoRA is to download it and use it with the [text-generation-webui](https://github.com/oobabooga/text-generation-webui). We also suggest using this character for better results. (credits: altoiddealer) ``` {{user}}: isometric coffee shop {{char}}: isometric model of a coffee shop with tables and stools, 3d fantasy cute house, intricate dimmed gold brown color tone, inspired by Miao Fu, miniature action figure, listing image, inside a glass box, bakery, inspired by Hideyuki Kikuchi {{user}}: Cthulhu cereal advertisement {{char}}: product photography of box of Cthulhu cereal with fish taste, two sad displeased crying kids eating bowl of cereal, box of Cthulhoos depicting an angry Cthulhu on the box, whimsical, commercial, magazine advertisement {{user}}: anthropomorphic owl bard {{char}}: full portrait of an anthropomorphic white owl, bard, human hands with gloves and a purple magic glow, big wings in the back, peacock motif on one wing, serious face, dnd, character design, digital art, intricate, detailed environment, fantasy {{user}}: professional photo of coffee beans {{char}}: professional macro photography of coffee beans {{user}}: illustration of girl riding a turtle {{char}}: a little adventurer riding a giant turtle in the jungle, children book illustration, playful, in the style of Francoise Seignobosc {{user}}: cinematic photo of Jim Carrey as The Joker {{char}}: Jim Carrey dressed as The Joker, cinematic light, detailed face, clown makeup, smile, cinematic style {{user}}: pixar grumpy old man {{char}}: enraged old man, face red with rage, fury. 
Humble house sidewalk background, animation pixar, cartoon, portrait, unreal engine, natural lighting, adorable {{user}}: hazy photo of man praying {{char}}: man in prayer, factured face close-up, haze filter, fisheye, vanishing point, vignetting, caustics, portrait, Masterpiece, soft lighting, high detail, amazing quality, saturated, light in contrast, trending on artstation, sharp focus, studio photo, intricate details, highly detailed {{user}}: simple watercolor painting of a dog {{char}}: simple watercolor painting of dachshund dog, simple white background {{user}}: glistening lioness in tar {{char}}: A slimy gooey black latex lioness, coated in glossy, tar-like goo that drips of its body, its body smooth and shiny as it drips, outside, nature, tar fluid, intricite and detailed digital art, painterly style, visible brushstrokes, textured brushstrokes, character concept art by Logan Preshaw and Johan Messely {{user}}: bubbles containing miniature worlds {{char}}: bubbles containing miniature worlds, extreme closeup, macro photography, moon, moonlight, stars, by ron arad, craola, mystical, otherworldly, magic, photorealism, volumetric lighting, beautiful, rich deep colors masterpiece, sharp focus, ultra detailed {{user}}: light flowing through desert {{char}}: a river of light flowing through the desert, fantasy, high contrast colors, epic detail, fantastic lighting, upscale {{user}}: Deadpool reading newspaper {{char}}: surreal, abstract, dark, fantasy, dichromatic, cinematic shot of Deadpool reading a newspaper in a classic diner, zoom lens, dynamic pose, action shot, cinematic lighting, highly detailed, intricate {{user}}: octopus chef {{char}}: a charming scene in an octopus' underwater kitchen, where the octopus, dressed in a chef's hat and apron, expertly uses its many arms to prepare a feast of seashell stew and kelp salad {{user}}: watercolor portrait of "Hoggle" {{char}}: watercolor portrait of "Hoggle from "Labyrinth" the movie, by Brian Froud and Jim Henson, mid-brown and green tones, loveable and a little strange and dark {{user}}: happy dog {{char}}: Golden Retriever with tongue sticking out, tail wagging {{user}}: neon punk sunset {{char}}: a wooden roundtable with wine and cheese in the foreground, a beautiful landscape in the background with grass hills and vineyards, cyberpunk aesthetic, neon punk style {{user}}: modern real state listing {{char}}: photo of a small cozy modern house in red woods on a mountain, solar panels, garage, driveway, great view, sunshine {{user}}: skull surrounded by butterflies {{char}}: a charming and hyper-realistic image of a skull, adorned with soft and cheerful flock of butterflies, hypnotic and uplifting atmosphere, masterfully rendered skull with intricate details, filled with fluttering and vibrant coloured butterflies, Rosanna Jones, Nick Brandt, highly detailed, natural outdoor environment, UHD resolution. ``` ## Credits: https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts https://huggingface.co/datasets/MadVoyager/stable_diffusion_instructional_dataset https://huggingface.co/datasets/lusstta/stable_diffusion_instructional_dataset https://huggingface.co/datasets/poloclub/diffusiondb
hamzamushtaq12/my_awesome_wnut_model
hamzamushtaq12
2024-01-11T17:39:52Z
86
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-10T17:12:01Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2683 - Precision: 0.6176 - Recall: 0.3114 - F1: 0.4140 - Accuracy: 0.9419 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2824 | 0.5947 | 0.2475 | 0.3495 | 0.9390 | | No log | 2.0 | 426 | 0.2683 | 0.6176 | 0.3114 | 0.4140 | 0.9419 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Tokenizers 0.15.0
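A usage sketch with the token-classification pipeline; the entity labels come from whatever id2label mapping was saved with the model, which the card does not document:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hamzamushtaq12/my_awesome_wnut_model",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("The Golden Gate Bridge is in San Francisco."))
```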
LoneStriker/Open_Gpt4_8x7B_v0.2-8.0bpw-h8-exl2
LoneStriker
2024-01-11T17:39:36Z
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T17:20:21Z
--- license: apache-2.0 tags: - merge - moe --- Open_Gpt4_v0.2 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merge of Mixtral-8x7B-Instruct-v0.1 and bagel-8x7b-v0.2, with MixtralOrochi8x7B as the base model. I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the model's capabilities and make it an even more useful model, maybe even competitive with closed-source models like GPT-4. But for that, more testing is required. I hope the community can help me determine if it's deserving of its name. 😊 This is the second iteration of this model, using better models in the merge to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-8x7b-v0.2 Instruct template: Alpaca Merge config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: .7 - model: bagel-8x7b-v0.2 parameters: density: .5 weight: 1 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
tirik00/dqn-SpaceInvadersNoFrameskip-v4
tirik00
2024-01-11T17:38:37Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-11T17:38:08Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 257.00 +/- 38.81 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tirik00 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tirik00 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tirik00 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 100000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
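For programmatic use outside the RL Zoo CLI, a sketch along these lines should work; the checkpoint filename is an assumption (check the repo's file list), and evaluation still needs the same Atari wrappers and frame stacking used in training, so the rl_zoo3 commands above are the more reliable route:

```python
from huggingface_hub import hf_hub_download
from stable_baselines3 import DQN

checkpoint = hf_hub_download(
    repo_id="tirik00/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed filename
)
model = DQN.load(checkpoint)  # loads policy weights; wrap envs as in training
```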
magnifi/llama-cls-ner-mt-chat-v6_epoch_24
magnifi
2024-01-11T17:33:09Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-11T17:33:08Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
XinHun/Arpeggio_of_Blue_Steel
XinHun
2024-01-11T17:30:53Z
0
1
null
[ "license:other", "region:us" ]
null
2023-12-22T23:47:53Z
--- license: other license_name: '001' license_link: LICENSE ---
vicgalle/phi-2-contextual
vicgalle
2024-01-11T17:26:01Z
4
5
peft
[ "peft", "safetensors", "dpo", "rag", "dataset:jondurbin/contextual-dpo-v0.1", "base_model:cognitivecomputations/dolphin-2_6-phi-2", "base_model:adapter:cognitivecomputations/dolphin-2_6-phi-2", "license:apache-2.0", "region:us" ]
null
2024-01-11T17:02:00Z
--- library_name: peft base_model: cognitivecomputations/dolphin-2_6-phi-2 license: apache-2.0 datasets: - jondurbin/contextual-dpo-v0.1 tags: - dpo - rag --- # Model Card for Phi-2-Contextual A lightweight (2.8B) model with enhanced RAG capabilities and a lower risk of hallucination. This is a DPO fine-tune of the Phi-2 architecture (in particular, dolphin-2_6-phi-2) over the dataset https://huggingface.co/datasets/jondurbin/contextual-dpo-v0.1. ![phi-2-contextual](./mixtureofloras_an_illustration_of_a_mechanical_dolphin_consulti_e0fc7b52-f504-48d9-ab78-1db58bd4167e.png) ## Usage Load the model as ``` from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained( "vicgalle/phi-2-contextual", torch_dtype="auto", load_in_4bit=True, trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained( "cognitivecomputations/dolphin-2_6-phi-2", trust_remote_code=True ) ``` and use the following prompt template. ## Prompt format ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` with the following definitions: - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the instruction(s), one or more, to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of the instruction set ## Examples ### Reference-based Q&A ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://en.wikipedia.org/wiki/Rottnest_Island ENDCONTEXT Rottnest Island (Nyungar: Wadjemup), often colloquially referred to as "Rotto", is a 19-square-kilometre (7.3 sq mi) island off the coast of Western Australia, located 18 kilometres (11 mi) west of Fremantle. A sandy, low-lying island formed on a base of aeolianite limestone, Rottnest is an A-class reserve, the highest level of protection afforded to public land. Together with Garden Island, Rottnest Island is a remnant of Pleistocene dune ridges.[7] Along with several other islands, Rottnest became separated from the mainland around 7,000 years ago, when sea levels rose; the traditional Noongar name for the island is Wadjemup, which means "place across the water where the spirits are".[8] Human artefacts have been found on the island dating back at least 30,000 years, but visitation and habitation of the island by the Noongar people appears to have ceased following its separation from the mainland. The island was first documented by Willem de Vlamingh in 1696, who called it 't Eylandt 't Rottenest ("Rats' Nest Island") after the quokka population. Following establishment of the Swan River Colony (now Perth) in 1829, the island was initially used by British settlers for agricultural purposes, and a permanent settlement was built in Thomson Bay.
From 1838 to 1931, Rottnest Island was also used as a prison and forced labour camp for over 3,600 Aboriginal people, who were subjected to extremely harsh conditions on the island.[9][10] Other historical uses of the island include as a military site, and for internment camps housing enemy aliens. Many of the island's buildings date from the colonial period, often made from locally quarried limestone, and are now used as accommodation for holidays. ENDINPUT BEGININSTRUCTION In which years was the island used as a jail? ENDINSTRUCTION ``` produces ``` The island was used as a jail from 1838 to 1931. ``` ### Structured output generation ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT The rock parrot (Neophema petrophila) is a species of grass parrot native to Australia. Described by John Gould in 1841, it is a small parrot 22 to 24 cm (8+3⁄4 to 9+1⁄2 in) long and weighing 50 to 60 g (1+3⁄4 to 2 oz) with predominantly olive-brown upperparts and more yellowish underparts. Its head is olive with light blue forecheeks and lores, and a dark blue frontal band line across the crown with lighter blue above and below. The sexes are similar in appearance, although the female tends to have a duller frontal band and less blue on the face. The preferred habitats are rocky islands and coastal dune areas from Lake Alexandrina in southeastern South Australia westwards across coastal South and Western Australia to Shark Bay. ENDINPUT BEGININSTRUCTION Generate a list with all the mentioned locations ENDINSTRUCTION ``` produces ``` The mentioned locations in the provided text are: 1. Lake Alexandrina in southeastern South Australia 2. Coastal dune areas extending from Lake Alexandrina to Shark Bay in Western Australia. ``` ### Multiple contexts ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://wikipedia.com/Lego ENDCONTEXT Since 1963, Lego pieces have been manufactured from acrylonitrile butadiene styrene (ABS).[19][39] As of September 2008, Lego engineers use the NX CAD/CAM/CAE PLM software suite to model the elements. The software allows the parts to be optimised by way of mould flow and stress analysis. Prototype moulds are sometimes built before the design is committed to mass production. The ABS plastic is heated to 232 °C (450 °F) until it reaches a dough-like consistency. It is then injected into the moulds using forces of between 25 and 150 tonnes and takes approximately 15 seconds to cool. The moulds are permitted a tolerance of up to twenty micrometres to ensure the bricks remain connected.[42] Human inspectors check the output of the moulds to eliminate significant variations in colour or thickness. According to the Lego Group, about eighteen bricks out of every million fail to meet the standard required.[46] Lego factories recycle all but about 1 percent of their plastic waste from the manufacturing process. If the plastic cannot be re-used in Lego bricks, it is processed and sold on to industries that can make use of it.[47][48] Lego, in 2018, set a self-imposed 2030 deadline to find a more eco-friendly alternative to the ABS plastic.[49] ENDINPUT BEGININPUT BEGINCONTEXT date: 2021-01-02 url: https://wikipedia.com/Rottnest ENDCONTEXT Rottnest Island (Nyungar: Wadjemup), often colloquially referred to as "Rotto", is a 19-square-kilometre (7.3 sq mi) island off the coast of Western Australia, located 18 kilometres (11 mi) west of Fremantle. 
A sandy, low-lying island formed on a base of aeolianite limestone, Rottnest is an A-class reserve, the highest level of protection afforded to public land. Together with Garden Island, Rottnest Island is a remnant of Pleistocene dune ridges.[7] Along with several other islands, Rottnest became separated from the mainland around 7,000 years ago, when sea levels rose; the traditional Noongar name for the island is Wadjemup, which means "place across the water where the spirits are".[8] Human artefacts have been found on the island dating back at least 30,000 years, but visitation and habitation of the island by the Noongar people appears to have ceased following its separation from the mainland. The island was first documented by Willem de Vlamingh in 1696, who called it 't Eylandt 't Rottenest ("Rats' Nest Island") after the quokka population. Following establishment of the Swan River Colony (now Perth) in 1829, the island was initially used by British settlers for agricultural purposes, and a permanent settlement was built in Thomson Bay. From 1838 to 1931, Rottnest Island was also used as a prison and forced labour camp for over 3,600 Aboriginal people, who were subjected to extremely harsh conditions on the island.[9][10] Other historical uses of the island include as a military site, and for internment camps housing enemy aliens. Many of the island's buildings date from the colonial period, often made from locally quarried limestone, and are now used as accommodation for holidays. ENDINPUT BEGININSTRUCTION What is threshold to ensure the pieces match? ENDINSTRUCTION ``` produces ``` The threshold to ensure the pieces match refers to the tolerance of up to twenty micrometres in the Lego brick manufacturing process. According to the provided context, this tolerance is set by the Lego Group and it helps to ensure that the bricks remain connected. Reference(s): date: 2021-01-01 url: https://wikipedia.com/Lego ``` ### Framework versions - PEFT 0.7.1
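For completeness, here is a minimal end-to-end generation sketch that follows the loading code and prompt format above. The context metadata, reference text, and question are illustrative placeholders, and the generation settings are assumptions rather than recommended values:

```python
# model and tokenizer as loaded in the Usage section above
prompt = """BEGININPUT
BEGINCONTEXT
date: 2024-01-11
url: https://example.com/article
ENDCONTEXT
[insert your reference text here]
ENDINPUT
BEGININSTRUCTION
[insert your question here]
ENDINSTRUCTION
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```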
AleRams/test-trainer
AleRams
2024-01-11T17:22:11Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-07T15:05:38Z
--- license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: test-trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2045 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6957 | 0.38 | 100 | 1.1625 | | 1.1736 | 0.76 | 200 | 1.1407 | | 0.8794 | 1.14 | 300 | 1.1247 | | 0.8277 | 1.52 | 400 | 1.1632 | | 0.8723 | 1.9 | 500 | 1.1683 | | 0.7154 | 2.28 | 600 | 1.1946 | | 0.5999 | 2.66 | 700 | 1.2045 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
jysssacc/bloomz-560m_adalora_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T17:20:46Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T17:19:15Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_adalora_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_adalora_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.8762 | 1.0 | 157 | 3.6749 | | 3.5139 | 2.0 | 314 | 3.3030 | | 3.3203 | 3.0 | 471 | 3.2798 | | 3.1488 | 4.0 | 628 | 3.2836 | | 3.0043 | 5.0 | 785 | 3.3209 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
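For reference, a minimal loading sketch with the `peft` library, based on the base model and adapter names above (an untested assumption; the prompt is a placeholder):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the AdaLoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base, "jysssacc/bloomz-560m_adalora_lr0.0005_bs4_epoch5_wd0.01")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")

inputs = tokenizer("Translate to English: Je t'aime.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```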
LoneStriker/Open_Gpt4_8x7B_v0.2-6.0bpw-h6-exl2
LoneStriker
2024-01-11T17:17:50Z
7
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "merge", "moe", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T17:03:33Z
--- license: apache-2.0 tags: - merge - moe --- Open_Gpt4_v0.2 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merge of Mixtral-8x7B-Instruct-v0.1 and bagel-8x7b-v0.2, with MixtralOrochi8x7B as the base model. I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral Instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral, and possibly other models that were not named. My goal was to expand the model's capabilities and make it even more useful, maybe even competitive with closed-source models like GPT-4. But for that, more testing is required. I hope the community can help me determine if it's deserving of its name. 😊 This is the second iteration of this model, using better models in the merge to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-8x7b-v0.2 Instruct template: Alpaca Merge config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: .7 - model: bagel-8x7b-v0.2 parameters: density: .5 weight: 1 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ```
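Since the card names Alpaca as the instruct template but gives no usage code, here is a minimal sketch of that prompt format (the preamble wording is the standard Alpaca template; the instruction is a placeholder):

```python
# Build an Alpaca-style prompt, as named under "Instruct template" above.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Explain what a TIES merge does."))
```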
jysssacc/mt0-base_adalora_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-11T17:06:08Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/mt0-base", "base_model:adapter:bigscience/mt0-base", "license:apache-2.0", "region:us" ]
null
2024-01-11T17:00:25Z
--- license: apache-2.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/mt0-base model-index: - name: mt0-base_adalora_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt0-base_adalora_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1184 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6219 | 1.0 | 157 | 1.3075 | | 1.4398 | 2.0 | 314 | 1.1082 | | 1.2806 | 3.0 | 471 | 0.6193 | | 0.4246 | 4.0 | 628 | 0.1525 | | 0.2452 | 5.0 | 785 | 0.1184 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
alirzb/S5_M1_f5_AST_42808593
alirzb
2024-01-11T16:59:04Z
146
0
transformers
[ "transformers", "pytorch", "audio-spectrogram-transformer", "audio-classification", "generated_from_trainer", "base_model:MIT/ast-finetuned-audioset-10-10-0.4593", "base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593", "license:bsd-3-clause", "endpoints_compatible", "region:us" ]
audio-classification
2024-01-11T13:03:46Z
--- license: bsd-3-clause base_model: MIT/ast-finetuned-audioset-10-10-0.4593 tags: - generated_from_trainer metrics: - accuracy model-index: - name: S5_M1_f5_AST_42808593 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # S5_M1_f5_AST_42808593 This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0145 - Accuracy: 0.9976 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0012 | 1.0 | 368 | 0.0114 | 0.9976 | | 0.0034 | 2.0 | 737 | 0.0147 | 0.9976 | | 0.0001 | 3.0 | 1105 | 0.0085 | 0.9968 | | 0.0 | 4.0 | 1474 | 0.0139 | 0.9976 | | 0.0 | 4.99 | 1840 | 0.0145 | 0.9976 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2 - Datasets 2.16.1 - Tokenizers 0.13.3
klyang/MentaLLaMA-chat-7B-hf
klyang
2024-01-11T16:58:53Z
145
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "medical", "en", "arxiv:2309.13567", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T16:45:42Z
--- license: mit language: - en metrics: - f1 tags: - medical --- # Introduction MentaLLaMA-chat-7B is part of the [MentaLLaMA](https://github.com/SteveKGYang/MentalLLaMA) project, the first open-source large language model (LLM) series for interpretable mental health analysis with instruction-following capability. This model is finetuned based on the Meta LLaMA2-chat-7B foundation model and the full IMHI instruction tuning data. The model is expected to perform complex mental health analysis for various mental health conditions and give reliable explanations for each of its predictions. It is fine-tuned on the IMHI dataset with 75K high-quality natural language instructions to boost its performance in downstream tasks. We perform a comprehensive evaluation on the IMHI benchmark with 20K test samples. The results show that MentaLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations. # Ethical Consideration Although experiments on MentaLLaMA show promising performance on interpretable mental health analysis, we stress that all predicted results and generated explanations should only be used for non-clinical research, and help-seekers should get assistance from professional psychiatrists or clinical practitioners. In addition, recent studies have indicated that LLMs may introduce some potential bias, such as gender gaps. Meanwhile, some incorrect prediction results, inappropriate explanations, and over-generalization also illustrate the potential risks of current LLMs. Therefore, there are still many challenges in applying the model to real-world mental health monitoring systems. ## Other Models in MentaLLaMA In addition to MentaLLaMA-chat-7B, the MentaLLaMA project includes three other models: MentaLLaMA-chat-13B, MentalBART, and MentalT5. - **MentaLLaMA-chat-13B**: This model is finetuned based on the Meta LLaMA2-chat-13B foundation model and the full IMHI instruction tuning data. The training data covers 10 mental health analysis tasks. - **MentalBART**: This model is finetuned based on the BART-large foundation model and the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis in a completion-based manner. - **MentalT5**: This model is finetuned based on the T5-large foundation model and the full IMHI-completion data. The training data covers 10 mental health analysis tasks. This model doesn't have instruction-following ability but is more lightweight and performs well in interpretable mental health analysis in a completion-based manner. ## Usage You can use the MentaLLaMA-chat-7B model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model: ```python from transformers import LlamaTokenizer, LlamaForCausalLM tokenizer = LlamaTokenizer.from_pretrained('klyang/MentaLLaMA-chat-7B') model = LlamaForCausalLM.from_pretrained('klyang/MentaLLaMA-chat-7B', device_map='auto') ``` In this example, LlamaTokenizer is used to load the tokenizer, and LlamaForCausalLM is used to load the model. The `device_map='auto'` argument is used to automatically use the GPU if it's available. ## License MentaLLaMA-chat-7B is licensed under MIT. For more details, please see the MIT license file.
## Citation If you use MentaLLaMA-chat-7B in your work, please cite our paper: ```bibtex @misc{yang2023mentalllama, title={MentalLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models}, author={Kailai Yang and Tianlin Zhang and Ziyan Kuang and Qianqian Xie and Sophia Ananiadou}, year={2023}, eprint={2309.13567}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
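As a usage supplement, a minimal generation sketch building on the loading code in the Usage section. The instruction below is an illustrative placeholder, not an official IMHI prompt; see the project repository for the exact instruction templates:

```python
# tokenizer and model as loaded in the Usage section above
prompt = "Consider this post: '[post text]'. Question: What mental health condition, if any, does the poster show, and why?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```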
jysssacc/bloomz-560m_lora_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T16:55:22Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T16:54:07Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_lora_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_lora_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1047 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9002 | 1.0 | 157 | 3.2934 | | 3.2942 | 2.0 | 314 | 3.3158 | | 2.9969 | 3.0 | 471 | 3.3750 | | 2.5806 | 4.0 | 628 | 3.6261 | | 2.1226 | 5.0 | 785 | 4.1047 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
jysssacc/opt-350m_lora_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-11T16:52:45Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "license:other", "region:us" ]
null
2024-01-10T16:45:29Z
--- license: other library_name: peft tags: - generated_from_trainer base_model: facebook/opt-350m model-index: - name: opt-350m_lora_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m_lora_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3707 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.9932 | 1.0 | 157 | 3.5348 | | 3.5358 | 2.0 | 314 | 3.3980 | | 3.4521 | 3.0 | 471 | 3.3849 | | 3.3953 | 4.0 | 628 | 3.3950 | | 3.3536 | 5.0 | 785 | 3.3707 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
jysssacc/627_roberta-base_lora_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-11T16:50:32Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-11T16:45:04Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: 627_roberta-base_lora_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 627_roberta-base_lora_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 19.1336 | 1.0 | 157 | 12.6299 | | 5.0823 | 2.0 | 314 | 4.0745 | | 3.8205 | 3.0 | 471 | 2.6807 | | 2.6215 | 4.0 | 628 | 2.1117 | | 2.336 | 5.0 | 785 | 1.9637 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
FeiiYin/lora-trained-xl-audi2
FeiiYin
2024-01-11T16:39:09Z
2
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-11T16:30:40Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'A photo of sks car on the street' output: url: "image_0.png" - text: 'A photo of sks car on the street' output: url: "image_1.png" - text: 'A photo of sks car on the street' output: url: "image_2.png" - text: 'A photo of sks car on the street' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of sks car license: openrail++ --- # SDXL LoRA DreamBooth - FeiiYin/lora-trained-xl-audi2 <Gallery /> ## Model description These are FeiiYin/lora-trained-xl-audi2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of sks car` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](FeiiYin/lora-trained-xl-audi2/tree/main) them in the Files & versions tab.
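A minimal inference sketch with diffusers (untested; the fp16 settings are assumptions, and the fp16-fix VAE is loaded here because the card notes it was used for training):

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes madebyollin/sdxl-vae-fp16-fix was the VAE used for training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
# Attach the LoRA adaptation weights from this repository.
pipe.load_lora_weights("FeiiYin/lora-trained-xl-audi2")

# Use the trigger phrase from the card.
image = pipe("A photo of sks car on the street").images[0]
image.save("sks_car.png")
```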
jysssacc/bloomz-560m_IA3_lr0.0005_bs4_epoch5_wd0.01
jysssacc
2024-01-11T16:30:48Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T16:29:38Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_IA3_lr0.0005_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_IA3_lr0.0005_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3373 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.2839 | 1.0 | 157 | 3.8408 | | 3.9115 | 2.0 | 314 | 3.5609 | | 3.6916 | 3.0 | 471 | 3.4080 | | 3.4991 | 4.0 | 628 | 3.3498 | | 3.4558 | 5.0 | 785 | 3.3373 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
thepriyansh/transformers-qa-2
thepriyansh
2024-01-11T16:25:11Z
46
0
transformers
[ "transformers", "tf", "roberta", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilroberta-base", "base_model:finetune:distilbert/distilroberta-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-11T16:24:40Z
--- license: apache-2.0 base_model: distilroberta-base tags: - generated_from_keras_callback model-index: - name: transformers-qa-2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # transformers-qa-2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9500 - Validation Loss: 1.1014 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.4034 | 1.1588 | 0 | | 0.9500 | 1.1014 | 1 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
rsaketh02/sak
rsaketh02
2024-01-11T16:23:11Z
43
0
transformers
[ "transformers", "tf", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-11T16:15:04Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: rsaketh02/sak results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # rsaketh02/sak This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0263 - Validation Loss: 0.0494 - Train Precision: 0.9302 - Train Recall: 0.9394 - Train F1: 0.9348 - Train Accuracy: 0.9873 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1686 | 0.0562 | 0.9098 | 0.9256 | 0.9177 | 0.9842 | 0 | | 0.0418 | 0.0494 | 0.9270 | 0.9362 | 0.9316 | 0.9866 | 1 | | 0.0263 | 0.0494 | 0.9302 | 0.9394 | 0.9348 | 0.9873 | 2 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_withPrompt_Seed102
behzadnet
2024-01-11T16:22:36Z
1
0
peft
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-01-11T16:22:30Z
--- library_name: peft base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.7.0.dev0
MaziyarPanahi/openbuddy-mistral-7b-v13-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T16:20:43Z
24
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "OpenBuddy/openbuddy-mistral-7b-v13", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T16:15:18Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - OpenBuddy/openbuddy-mistral-7b-v13 --- # openbuddy-mistral-7b-v13-Mistral-7B-Instruct-v0.2-slerp openbuddy-mistral-7b-v13-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [OpenBuddy/openbuddy-mistral-7b-v13](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: OpenBuddy/openbuddy-mistral-7b-v13 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/openbuddy-mistral-7b-v13-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
joshnguyen/mformer-fairness
joshnguyen
2024-01-11T16:17:58Z
482
3
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-15T03:06:50Z
--- license: mit language: - en library_name: transformers ---
joshnguyen/mformer-loyalty
joshnguyen
2024-01-11T16:17:45Z
560
2
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-09-15T03:07:02Z
--- license: mit language: - en library_name: transformers ---
abhinav-32/distilbert-base-uncased-finetuned-emotion
abhinav-32
2024-01-11T16:10:32Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-15T12:47:38Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.938 - name: F1 type: f1 value: 0.938136684358856 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1460 - Accuracy: 0.938 - F1: 0.9381 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.1729 | 1.0 | 250 | 0.1629 | 0.93 | 0.9303 | | 0.1087 | 2.0 | 500 | 0.1460 | 0.938 | 0.9381 | ### Framework versions - Transformers 4.36.1 - Pytorch 2.1.1 - Datasets 2.15.0 - Tokenizers 0.15.0
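A minimal classification sketch (the input sentence is a placeholder; the returned label will be one of the six emotion classes from the emotion dataset):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abhinav-32/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled about these results!"))
# e.g. [{'label': 'joy', 'score': 0.99}] (illustrative output)
```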
jysssacc/bloomz-560m_adalora_lr5e-05_bs4_epoch5_wd0.01
jysssacc
2024-01-11T16:06:38Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T16:05:10Z
--- license: bigscience-bloom-rail-1.0 library_name: peft tags: - generated_from_trainer base_model: bigscience/bloomz-560m model-index: - name: bloomz-560m_adalora_lr5e-05_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloomz-560m_adalora_lr5e-05_bs4_epoch5_wd0.01 This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3996 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.0196 | 1.0 | 157 | 4.5896 | | 4.7191 | 2.0 | 314 | 4.3050 | | 4.4762 | 3.0 | 471 | 3.7209 | | 3.6922 | 4.0 | 628 | 3.4267 | | 3.6017 | 5.0 | 785 | 3.3996 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
decruz07/kellemar-DPO-7B-e
decruz07
2024-01-11T16:02:38Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T15:28:52Z
--- license: apache-2.0 --- # Model Card for decruz07/kellemar-DPO-7B-e <!-- Provide a quick summary of what the model is/does. --> Learning Rate: 5e-5, steps 300 ## Model Details Created with beta = 0.05 ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** @decruz - **Funded by [optional]:** my full-time job - **Finetuned from model [optional]:** teknium/OpenHermes-2.5-Mistral-7B ## Uses You can use this for basic inference. You could probably fine-tune with it if you want to. ## How to Get Started with the Model You can create a Space out of this, or use basic Python code to call the model directly and run inference. [More Information Needed] ## Training Details The following was used:

```python
# model, ref_model, dataset, tokenizer, peft_config, and new_model
# are defined earlier in the training notebook.
from transformers import TrainingArguments
from trl import DPOTrainer

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    save_strategy="no",
    logging_steps=1,
    output_dir=new_model,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    bf16=True,
    report_to="wandb",
)

# Create DPO trainer
dpo_trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
```

### Training Data This was trained with https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs ### Training Procedure Trained with Labonne's Google Colab notebook on fine-tuning Mistral 7B with DPO. ## Model Card Authors [optional] @decruz ## Model Card Contact @decruz on X/Twitter
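A minimal inference sketch (untested; it assumes the tokenizer ships the chat template inherited from the OpenHermes-2.5-Mistral-7B base, so fall back to a plain text prompt if `apply_chat_template` fails):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "decruz07/kellemar-DPO-7B-e"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What does DPO fine-tuning change about a model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```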
Federic/lora-fine-tuning-llama2-SQL-lora-100-dataset-size
Federic
2024-01-11T15:56:53Z
0
0
null
[ "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:finetune:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-11T14:08:45Z
--- base_model: meta-llama/Llama-2-7b-chat-hf tags: - trl - sft - generated_from_trainer model-index: - name: lora-fine-tuning-llama2-SQL-lora-100-dataset-size results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora-fine-tuning-llama2-SQL-lora-100-dataset-size This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6509 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3966 | 0.54 | 7 | 2.1651 | | 1.0786 | 1.08 | 14 | 0.9747 | | 0.6613 | 1.62 | 21 | 0.7750 | | 0.6943 | 2.15 | 28 | 0.6982 | | 0.4302 | 2.69 | 35 | 0.6674 | | 0.4016 | 3.23 | 42 | 0.6505 | | 0.624 | 3.77 | 49 | 0.6498 | | 0.5111 | 4.31 | 56 | 0.6542 | | 0.4381 | 4.85 | 63 | 0.6509 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
MKS3099/roberta-base-finetuned-stationary-chatgptDS
MKS3099
2024-01-11T15:53:13Z
94
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-11T15:52:37Z
--- license: mit base_model: roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: roberta-base-finetuned-stationary-chatgptDS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-stationary-chatgptDS This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6459 - Accuracy: 0.7367 - F1: 0.7370 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.6374 | 1.0 | 75 | 0.6259 | 0.665 | 0.5312 | | 0.5898 | 2.0 | 150 | 0.5705 | 0.7067 | 0.6957 | | 0.5349 | 3.0 | 225 | 0.5607 | 0.725 | 0.6971 | | 0.4875 | 4.0 | 300 | 0.6014 | 0.6717 | 0.6807 | | 0.4353 | 5.0 | 375 | 0.5648 | 0.73 | 0.7188 | | 0.414 | 6.0 | 450 | 0.6210 | 0.7383 | 0.7044 | | 0.3587 | 7.0 | 525 | 0.6130 | 0.7367 | 0.7322 | | 0.299 | 8.0 | 600 | 0.6070 | 0.7333 | 0.7319 | | 0.2847 | 9.0 | 675 | 0.6725 | 0.7633 | 0.7519 | | 0.268 | 10.0 | 750 | 0.6459 | 0.7367 | 0.7370 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T15:50:13Z
23
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Azure99/blossom-v3-mistral-7b", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T15:45:19Z
--- license: apache-2.0 tags: - merge - mergekit - mistral - 7b - lazymergekit - mistralai/Mistral-7B-Instruct-v0.2 - Azure99/blossom-v3-mistral-7b --- # blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.2-slerp blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [Azure99/blossom-v3-mistral-7b](https://huggingface.co/Azure99/blossom-v3-mistral-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.2 layer_range: [0, 32] - model: Azure99/blossom-v3-mistral-7b layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-Instruct-v0.2 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/blossom-v3-mistral-7b-Mistral-7B-Instruct-v0.2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
jysssacc/627_roberta-base_adalora_lr5e-06_bs4_epoch5_wd0.01
jysssacc
2024-01-11T15:46:25Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-11T15:39:33Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: roberta-base model-index: - name: 627_roberta-base_adalora_lr5e-06_bs4_epoch5_wd0.01 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 627_roberta-base_adalora_lr5e-06_bs4_epoch5_wd0.01 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 22.6694 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 21.0092 | 1.0 | 157 | 23.2143 | | 20.8639 | 2.0 | 314 | 23.1323 | | 20.7846 | 3.0 | 471 | 22.9716 | | 20.6425 | 4.0 | 628 | 22.7547 | | 20.3443 | 5.0 | 785 | 22.6694 | ### Framework versions - PEFT 0.7.1 - Transformers 4.36.2 - Pytorch 2.0.1 - Datasets 2.16.1 - Tokenizers 0.15.0
MaziyarPanahi/mindy-7b-v2-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T15:39:09Z
25
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "mindy-labs/mindy-7b-v2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T15:34:04Z
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- mindy-labs/mindy-7b-v2
---

# mindy-7b-v2-Mistral-7B-Instruct-v0.2-slerp

mindy-7b-v2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [mindy-labs/mindy-7b-v2](https://huggingface.co/mindy-labs/mindy-7b-v2)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: mindy-labs/mindy-7b-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/mindy-7b-v2-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
TheBloke/phixtral-4x2_8-GPTQ
TheBloke
2024-01-11T15:36:02Z
43
24
transformers
[ "transformers", "safetensors", "phi-msft", "text-generation", "moe", "nlp", "code", "cognitivecomputations/dolphin-2_6-phi-2", "lxuechen/phi-2-dpo", "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1", "mrm8488/phi-2-coder", "conversational", "custom_code", "en", "base_model:mlabonne/phixtral-4x2_8", "base_model:quantized:mlabonne/phixtral-4x2_8", "license:mit", "autotrain_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-01-11T14:38:56Z
---
base_model: mlabonne/phixtral-4x2_8
inference: false
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
model_creator: Maxime Labonne
model_name: Phixtral 4X2 8
model_type: phi-msft
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
tags:
- moe
- nlp
- code
- cognitivecomputations/dolphin-2_6-phi-2
- lxuechen/phi-2-dpo
- Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
- mrm8488/phi-2-coder
---

<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Phixtral 4X2 8 - GPTQ
- Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)
- Original model: [Phixtral 4X2 8](https://huggingface.co/mlabonne/phixtral-4x2_8)

<!-- description start -->
# Description

This repo contains GPTQ model files for [Maxime Labonne's Phixtral 4X2 8](https://huggingface.co/mlabonne/phixtral-4x2_8).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ)
* [Maxime Labonne's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlabonne/phixtral-4x2_8)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Unknown

```
{prompt}
```

<!-- prompt-template end -->

<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers

GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.

These GPTQ models are known to work in the following inference servers/webuis.

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)

This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->

<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.

<details>
  <summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.

</details>

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.46 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.90 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 8.09 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 8.26 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 8.79 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.61 GB | No | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/phixtral-4x2_8-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/phixtral-4x2_8-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `phixtral-4x2_8-GPTQ`:

```shell
mkdir phixtral-4x2_8-GPTQ
huggingface-cli download TheBloke/phixtral-4x2_8-GPTQ --local-dir phixtral-4x2_8-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir phixtral-4x2_8-GPTQ
huggingface-cli download TheBloke/phixtral-4x2_8-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir phixtral-4x2_8-GPTQ --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir phixtral-4x2_8-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/phixtral-4x2_8-GPTQ --local-dir phixtral-4x2_8-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/phixtral-4x2_8-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)

<!-- README_GPTQ.md-download-from-branches end -->

<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/phixtral-4x2_8-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/phixtral-4x2_8-GPTQ:gptq-4bit-32g-actorder_True`
    - see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `phixtral-4x2_8-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    - Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->

<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/phixtral-4x2_8-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''{prompt}
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->

<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```

### Example Python code

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/phixtral-4x2_8-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=True,
                                             revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template = f'''{prompt}
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Maxime Labonne's Phixtral 4X2 8

![](https://i.imgur.com/UOb2fvh.jpg)

# phixtral-4x2_8

phixtral-4x2_8 is the first Mixture of Experts (MoE) made with four [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) models, inspired by the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) architecture. It performs better than each individual expert.

## 🏆 Evaluation

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
| [**phixtral-4x2_8**](https://huggingface.co/mlabonne/phixtral-4x2_8) | **33.91** | **70.44** | **48.78** | **37.68** | **47.7** |
| [dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2) | 33.12 | 69.85 | 47.39 | 37.2 | 46.89 |
| [phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo) | 30.39 | 71.68 | 50.75 | 34.9 | 46.93 |
| [phi-2-sft-dpo-gpt4_en-ep1](https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1) | 30.61 | 71.13 | 48.74 | 35.23 | 46.43 |
| [phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder) | TBD | TBD | TBD | TBD | TBD |
| [phi-2](https://huggingface.co/microsoft/phi-2) | 27.98 | 70.8 | 44.43 | 35.21 | 44.61 |

Check [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) to compare it with other models.

## 🧩 Configuration

The model has been made with a custom version of the [mergekit](https://github.com/cg123/mergekit) library (mixtral branch) and the following configuration:

```yaml
base_model: cognitivecomputations/dolphin-2_6-phi-2
gate_mode: cheap_embed
experts:
  - source_model: cognitivecomputations/dolphin-2_6-phi-2
    positive_prompts: [""]
  - source_model: lxuechen/phi-2-dpo
    positive_prompts: [""]
  - source_model: Yhyu13/phi-2-sft-dpo-gpt4_en-ep1
    positive_prompts: [""]
  - source_model: mrm8488/phi-2-coder
    positive_prompts: [""]
```

## 💻 Usage

Here's a [Colab notebook](https://colab.research.google.com/drive/1k6C_oJfEKUq0mtuWKisvoeMHxTcIxWRa?usp=sharing) to run Phixtral in 4-bit precision on a free T4 GPU.

```python
!pip install -q --upgrade transformers einops accelerate bitsandbytes

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "phixtral-4x2_8"
instruction = '''
    def print_prime(n):
        """
        Print all primes between 1 and n
        """
'''

torch.set_default_device("cuda")

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    f"mlabonne/{model_name}",
    torch_dtype="auto",
    load_in_4bit=True,
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    f"mlabonne/{model_name}",
    trust_remote_code=True
)

# Tokenize the input string
inputs = tokenizer(
    instruction,
    return_tensors="pt",
    return_attention_mask=False
)

# Generate text using the model
outputs = model.generate(**inputs, max_length=200)

# Decode and print the output
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

Inspired by [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), you can specify the `num_experts_per_tok` and `num_local_experts` in the [`config.json`](https://huggingface.co/mlabonne/phixtral-4x2_8/blob/main/config.json#L26-L27) file (2 and 4 by default). This configuration is automatically loaded in `configuration.py`.

[vince62s](https://huggingface.co/vince62s) implemented the MoE inference code in the `modeling_phi.py` file. In particular, see the [MoE class](https://huggingface.co/mlabonne/phixtral-4x2_8/blob/main/modeling_phi.py#L293-L317).

## 🤝 Acknowledgments

A special thanks to [vince62s](https://huggingface.co/vince62s) for the inference code and the dynamic configuration of the number of experts. He was very patient and helped me to debug everything.

Thanks to [Charles Goddard](https://github.com/cg123) for the [mergekit](https://github.com/cg123/mergekit) library and the implementation of the [MoE for clowns](https://goddard.blog/posts/clown-moe/).

Thanks to [ehartford](https://huggingface.co/ehartford), [lxuechen](https://huggingface.co/lxuechen), [Yhyu13](https://huggingface.co/Yhyu13), and [mrm8488](https://huggingface.co/mrm8488) for their fine-tuned phi-2 models.
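If you would rather not edit `config.json` by hand, the expert count can in principle be overridden at load time by mutating the config object first. The sketch below is untested and assumes the remote configuration class exposes `num_experts_per_tok` as a plain attribute, as the card describes:

```python
# Untested sketch: assumes the remote PhiConfig exposes num_experts_per_tok
# as a plain attribute, per the card above.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("mlabonne/phixtral-4x2_8", trust_remote_code=True)
config.num_experts_per_tok = 1  # route each token to a single expert instead of two

model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/phixtral-4x2_8",
    config=config,
    torch_dtype="auto",
    trust_remote_code=True,
)
```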
JCTN/IP-Adapter-FaceID
JCTN
2024-01-11T15:34:10Z
84
2
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "en", "arxiv:2308.06721", "region:us" ]
text-to-image
2024-01-11T15:21:18Z
---
tags:
- text-to-image
- stable-diffusion
language:
- en
library_name: diffusers
---

# IP-Adapter-FaceID Model Card

<div align="center">

[**Project Page**](https://ip-adapter.github.io) **|** [**Paper (ArXiv)**](https://arxiv.org/abs/2308.06721) **|** [**Code**](https://github.com/tencent-ailab/IP-Adapter)
</div>

---

## Introduction

An experimental version of IP-Adapter-FaceID: we use the face ID embedding from a face recognition model instead of a CLIP image embedding; additionally, we use LoRA to improve ID consistency. IP-Adapter-FaceID can generate images in various styles conditioned on a face with only text prompts.

![results](./ip-adapter-faceid.jpg)

**Update 2023/12/27**: IP-Adapter-FaceID-Plus: face ID embedding (for face ID) + CLIP image embedding (for face structure)

<div align="center">

![results](./faceid-plus.jpg)
</div>

**Update 2023/12/28**: IP-Adapter-FaceID-PlusV2: face ID embedding (for face ID) + controllable CLIP image embedding (for face structure). You can adjust the weight of the face structure to get different generations!

<div align="center">

![results](./faceid_plusv2.jpg)
</div>

**Update 2024/01/04**: IP-Adapter-FaceID-SDXL: an experimental SDXL version of IP-Adapter-FaceID

<div align="center">

![results](./sdxl_faceid.jpg)
</div>

## Usage

### IP-Adapter-FaceID

Firstly, you should use [insightface](https://github.com/deepinsight/insightface) to extract the face ID embedding:

```python
import cv2
from insightface.app import FaceAnalysis
import torch

app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("person.jpg")
faces = app.get(image)

faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
```

Then, you can generate images conditioned on the face embeddings:

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid import IPAdapterFaceID

base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid_sd15.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)

# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds,
    num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
```

You can also use a normal IP-Adapter and a normal LoRA to load the model:

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid_separate import IPAdapterFaceID

base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
ip_ckpt = "ip-adapter-faceid_sd15.bin"
lora_ckpt = "ip-adapter-faceid_sd15_lora.safetensors"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)

# load lora and fuse
pipe.load_lora_weights(lora_ckpt)
pipe.fuse_lora()

# load ip-adapter
ip_model = IPAdapterFaceID(pipe, ip_ckpt, device)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds,
    num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
```

### IP-Adapter-FaceID-SDXL

Firstly, you should use [insightface](https://github.com/deepinsight/insightface) to extract the face ID embedding:

```python
import cv2
from insightface.app import FaceAnalysis
import torch

app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("person.jpg")
faces = app.get(image)

faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
```

Then, you can generate images conditioned on the face embeddings:

```python
import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler
from PIL import Image

from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDXL

base_model_path = "SG161222/RealVisXL_V3.0"
ip_ckpt = "ip-adapter-faceid_sdxl.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    add_watermarker=False,
)

# load ip-adapter
ip_model = IPAdapterFaceIDXL(pipe, ip_ckpt, device)

# generate image
prompt = "A closeup shot of a beautiful Asian teenage girl in a white dress wearing small silver earrings in the garden, under the soft morning light"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, faceid_embeds=faceid_embeds, num_samples=2,
    width=1024, height=1024,
    num_inference_steps=30, guidance_scale=7.5, seed=2023
)
```

### IP-Adapter-FaceID-Plus

Firstly, you should use [insightface](https://github.com/deepinsight/insightface) to extract the face ID embedding and face image:

```python
import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align
import torch

app = FaceAnalysis(name="buffalo_l", providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

image = cv2.imread("person.jpg")
faces = app.get(image)

faceid_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)
face_image = face_align.norm_crop(image, landmark=faces[0].kps, image_size=224)  # you can also segment the face
```

Then, you can generate images conditioned on the face embeddings:

```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler, AutoencoderKL
from PIL import Image

from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDPlus

v2 = False
base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
vae_model_path = "stabilityai/sd-vae-ft-mse"
image_encoder_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
ip_ckpt = "ip-adapter-faceid-plus_sd15.bin" if not v2 else "ip-adapter-faceid-plusv2_sd15.bin"
device = "cuda"

noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
vae = AutoencoderKL.from_pretrained(vae_model_path).to(dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    scheduler=noise_scheduler,
    vae=vae,
    feature_extractor=None,
    safety_checker=None
)

# load ip-adapter
ip_model = IPAdapterFaceIDPlus(pipe, image_encoder_path, ip_ckpt, device)

# generate image
prompt = "photo of a woman in red dress in a garden"
negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality, blurry"

images = ip_model.generate(
    prompt=prompt, negative_prompt=negative_prompt, face_image=face_image, faceid_embeds=faceid_embeds, shortcut=v2, s_scale=1.0,
    num_samples=4, width=512, height=768, num_inference_steps=30, seed=2023
)
```

## Limitations and Bias

- The model does not achieve perfect photorealism and ID consistency.
- The generalization of the model is limited due to limitations of the training data, base model and face recognition model.

## Non-commercial use

**This model is released exclusively for research purposes and is not intended for commercial use.**
MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T15:27:10Z
30
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T15:22:11Z
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
---

# OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.2-slerp

OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/OpenHermes-2.5-neural-chat-7b-v3-1-7B-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
damiam/bge-base-en-v1.5-few-shot-sex
damiam
2024-01-11T15:19:43Z
47
0
setfit
[ "setfit", "pytorch", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "model-index", "region:us" ]
text-classification
2024-01-11T10:24:21Z
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: The skull protects the brain and eyes.
- text: to her sex
- text: play hide and seek
- text: I was before
- text: 'no'
pipeline_tag: text-classification
inference: true
base_model: BAAI/bge-base-en-v1.5
model-index:
- name: SetFit with BAAI/bge-base-en-v1.5
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 1.0
      name: Accuracy
---

# SetFit with BAAI/bge-base-en-v1.5

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------|
| sex   | <ul><li>'I like to suck cock'</li><li>'suck me'</li><li>'What happens when you ride a human being'</li></ul>  |
| other | <ul><li>'ok'</li><li>'Can I ask a question?'</li><li>'THANK YOU'</li></ul>                                    |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 1.0      |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("no")
```

<!-- ### Downstream Use

*List how someone could finetune this model on their own dataset.* -->

<!-- ### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.* -->

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* -->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* -->

## Training Details

### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 1   | 5.9314 | 29  |

| Label | Training Sample Count |
|:------|:----------------------|
| other | 48                    |
| sex   | 54                    |

### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0060 | 1    | 0.2346        | -               |
| 0.2994 | 50   | 0.1766        | -               |
| 0.5988 | 100  | 0.0068        | -               |
| 0.8982 | 150  | 0.0033        | -               |
| 1.1976 | 200  | 0.0032        | -               |
| 1.4970 | 250  | 0.0027        | -               |
| 1.7964 | 300  | 0.0025        | -               |

### Framework Versions
- Python: 3.9.6
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.29.2
- PyTorch: 1.13.1
- Datasets: 2.11.0
- Tokenizers: 0.13.3

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
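The hyperparameters listed above map directly onto SetFit 1.0's `TrainingArguments`. A minimal sketch of how such a run is typically launched follows; the dataset rows are placeholders taken from the label examples above, not the actual (unpublished) training data:

```python
# Minimal sketch; the training data for this model is not published, so
# `train_dataset` below is a placeholder built from the card's label examples.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_dataset = Dataset.from_dict({
    "text": ["ok", "Can I ask a question?", "to her sex"],
    "label": ["other", "other", "sex"],
})

model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

args = TrainingArguments(
    batch_size=(32, 32),                 # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```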
jysssacc/627_roberta-base_fine_lr5e-06_bs4_epoch5_wd0.01
jysssacc
2024-01-11T15:11:01Z
43
0
transformers
[ "transformers", "safetensors", "roberta", "text-generation", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:finetune:FacebookAI/roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T15:03:22Z
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: 627_roberta-base_fine_lr5e-06_bs4_epoch5_wd0.01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 627_roberta-base_fine_lr5e-06_bs4_epoch5_wd0.01

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4585

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 16.9754 | 1.0 | 157 | 5.0326 |
| 3.3766 | 2.0 | 314 | 2.4013 |
| 2.2844 | 3.0 | 471 | 1.2212 |
| 1.0349 | 4.0 | 628 | 0.5631 |
| 0.7497 | 5.0 | 785 | 0.4585 |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
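The hyperparameter list corresponds one-to-one with `transformers.TrainingArguments`; a sketch of the implied configuration follows. Only the values listed above come from this card, everything else (the output directory, the per-epoch evaluation) is an assumption:

```python
# Sketch of the TrainingArguments implied by the card; only the listed values
# are taken from it, the rest are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="627_roberta-base_fine_lr5e-06_bs4_epoch5_wd0.01",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    weight_decay=0.01,
    evaluation_strategy="epoch",  # assumption: the card reports per-epoch validation loss
)
```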
simonycl/data-selection-Llama-2-7b-p3-uniform-epoch-15
simonycl
2024-01-11T15:05:48Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-01-11T15:05:37Z
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Data Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.7.1
LucciAI/LlamaDos-chat-hf
LucciAI
2024-01-11T15:03:04Z
9
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "llama2", "es", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T14:55:36Z
---
language:
- es
tags:
- conversational
- llama2
---

LlamaDos is a model oriented to having conversations in Spanish. It is the result of fine-tuning Meta's Llama-2-7b model using several optimization techniques, such as LoRA, quantization and gradient accumulation, which allowed the training to be performed on a single consumer GPU (RTX 3090).

More specifically, more than 250,000 conversational examples were used, and the training took approximately 140 hours.

More info on GitHub: https://github.com/Garrachonr/LlamaDos

The training has been performed following the original data structure of the Llama 2 paper, so it is recommended to follow the same structure for inference:

```python
<s>[INST] <<SYS>>
{{ You are a helpful, respectful and honest conversational assistant. Have a conversation with the user in a natural way. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. }}
<</SYS>>

{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST] {{ model_answer_2 }} </s>
```

In order to use this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

base_model = AutoModelForCausalLM.from_pretrained(
    "garrachonr/llamaDos",
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",  # the original snippet referenced an undefined device_map variable
)
tokenizer = AutoTokenizer.from_pretrained("garrachonr/llamaDos", trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Run text generation pipeline with llamaDos
system_prompt = "You are a helpful, respectful and honest conversational assistant. Have a conversation with the user in a natural way. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature."
prompt1 = "Acabo de adoptar un perro"
prompt2 = "Muy buena decisión, te gustan los perros?"
prompt3 = "Si, cuando era pequeño tenía uno y ahora he podido adoptar otro"
text = "<s>[INST] <<SYS>> {} <</SYS>> {} [/INST] {} </s><s>[INST] {} [/INST]".format(system_prompt, prompt1, prompt2, prompt3)

pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=200)
result = pipe(text)
print(result[0]['generated_text'])
```

This work is funded by the Comunidad de Madrid through the call Research Grants for Young Investigators from Universidad Politécnica de Madrid (GENIUS:APOYO-JOVENES-21-TAXTYC-32-K61X37), and supported by the following projects: European Commission through Project ASTOUND (101071191–-HORIZON-EIC-2021-PATHFINDERCHALLENGES-01) and BEWORD (PID2021-126061OB-C43) funded by MCIN/AEI/10.13039/501100011033 and, as appropriate, by "ERDF A way of making Europe", by the "European Union".

We also want to thank MS Azure services (especially Irving Kwong) for their sponsorship to translate all dialogue databases into Spanish.
MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T14:58:15Z
28
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "SciPhi/SciPhi-Mistral-7B-32k", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T14:53:31Z
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- SciPhi/SciPhi-Mistral-7B-32k
---

# SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.2-slerp

SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [SciPhi/SciPhi-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: SciPhi/SciPhi-Mistral-7B-32k
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/SciPhi-Mistral-7B-32k-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
jysssacc/bloomz-560m_adalora_lr5e-06_bs4_epoch5_wd0.01
jysssacc
2024-01-11T14:55:09Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:bigscience/bloomz-560m", "base_model:adapter:bigscience/bloomz-560m", "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2024-01-11T14:53:37Z
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-560m
model-index:
- name: bloomz-560m_adalora_lr5e-06_bs4_epoch5_wd0.01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bloomz-560m_adalora_lr5e-06_bs4_epoch5_wd0.01

This model is a fine-tuned version of [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5088

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0323 | 1.0 | 157 | 4.7024 |
| 4.9929 | 2.0 | 314 | 4.6622 |
| 4.9277 | 3.0 | 471 | 4.5968 |
| 4.8543 | 4.0 | 628 | 4.5309 |
| 4.8158 | 5.0 | 785 | 4.5088 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
habetyan/distilbert-base-uncased-finetuned-squad
habetyan
2024-01-11T14:54:09Z
47
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-10-09T09:04:05Z
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: habetyan/distilbert-base-uncased-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# habetyan/distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4944
- Train End Logits Accuracy: 0.6106
- Train Start Logits Accuracy: 0.5711
- Validation Loss: 1.1621
- Validation End Logits Accuracy: 0.6856
- Validation Start Logits Accuracy: 0.6450
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4944     | 0.6106                    | 0.5711                      | 1.1621          | 0.6856                         | 0.6450                           | 0     |

### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
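The serialized optimizer dict above corresponds to Adam driven by a linear-to-zero `PolynomialDecay` learning-rate schedule. Reconstructed in Keras, it would look roughly like this (all values are read directly off the config above):

```python
# Reconstruction of the optimizer described by the serialized config above.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=11064,
    end_learning_rate=0.0,
    power=1.0,  # power=1.0 makes the decay linear
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```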
ethompson93/a2c-PandaReachDense-v3
ethompson93
2024-01-11T14:53:57Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-11T14:49:06Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.52 +/- 0.73
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the checkpoint filename is an assumption that follows the usual huggingface_sb3 naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the filename is assumed to follow
# the standard "<algo>-<env>.zip" convention used by huggingface_sb3.
checkpoint = load_from_hub(
    repo_id="ethompson93/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```
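To evaluate the loaded policy, a sketch that reuses `model` from the block above; `panda-gym` must be installed so the environment is registered with Gymnasium:

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("PandaReachDense-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```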
MaziyarPanahi/go-bruins-v2-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T14:46:44Z
27
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "rwitz/go-bruins-v2", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T14:41:42Z
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- rwitz/go-bruins-v2
---

# go-bruins-v2-Mistral-7B-Instruct-v0.2-slerp

go-bruins-v2-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: rwitz/go-bruins-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/go-bruins-v2-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
yurimalheiros/corgy_dog_LoRA
yurimalheiros
2024-01-11T14:44:28Z
1
2
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-11T14:44:21Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---

# SDXL LoRA DreamBooth - yurimalheiros/corgy_dog_LoRA

<Gallery />

## Model description

These are yurimalheiros/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `a photo of TOK dog` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/yurimalheiros/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
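## Usage (sketch)

The card does not include inference code; the following is standard diffusers LoRA usage, not taken from the training repo. The trigger phrase comes from the card, while the rest of the prompt and the sampler settings are illustrative.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model the LoRA was trained against (named in the card).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository.
pipe.load_lora_weights("yurimalheiros/corgy_dog_LoRA")

# "a photo of TOK dog" is the trigger phrase; the rest is illustrative.
image = pipe("a photo of TOK dog on a beach", num_inference_steps=25).images[0]
image.save("corgy_dog.png")
```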
raoel/bert-finetuned-ASAP-AEStask
raoel
2024-01-11T14:40:10Z
49
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-11T13:54:11Z
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: raoel/bert-finetuned-ASAP-AEStask
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# raoel/bert-finetuned-ASAP-AEStask

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3250
- Epoch: 4

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32

### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 1.1863     | 0     |
| 0.8763     | 1     |
| 0.7140     | 2     |
| 0.5078     | 3     |
| 0.3250     | 4     |

### Framework versions

- Transformers 4.33.0
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
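## Inference (sketch)

A short inference sketch, not part of the original card: `framework="tf"` is passed because the repository ships TensorFlow weights. Note that the card does not document what the output labels mean; ASAP is an automated essay-scoring benchmark, so they are presumably score classes, but that is an assumption.

```python
from transformers import pipeline

# framework="tf" because the repo provides TensorFlow weights.
clf = pipeline(
    "text-classification",
    model="raoel/bert-finetuned-ASAP-AEStask",
    framework="tf",
)

# Illustrative input; the card does not document the label-to-score mapping.
print(clf("The author supports the claim with concrete examples and a clear conclusion."))
```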
jysssacc/627_roberta-base_lora_lr5e-06_bs4_epoch5_wd0.01
jysssacc
2024-01-11T14:38:19Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2024-01-11T13:30:44Z
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_lora_lr5e-06_bs4_epoch5_wd0.01
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# 627_roberta-base_lora_lr5e-06_bs4_epoch5_wd0.01

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9114

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 19.8663       | 1.0   | 157  | 21.5215         |
| 18.0482       | 2.0   | 314  | 19.0530         |
| 15.5817       | 3.0   | 471  | 12.3029         |
| 8.038         | 4.0   | 628  | 6.7318          |
| 6.4436        | 5.0   | 785  | 5.9114          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
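## Loading the adapter (sketch)

The card gives no usage code and no task, so the following is a hedged loading sketch only. The masked-LM head is an assumption (roberta-base is a masked language model); swap the Auto class if the adapter was trained for a different head.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import PeftModel

# Base model named in the card; the masked-LM head is an assumption,
# since the card does not say which task the adapter was trained for.
base = AutoModelForMaskedLM.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

model = PeftModel.from_pretrained(base, "jysssacc/627_roberta-base_lora_lr5e-06_bs4_epoch5_wd0.01")
```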
MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.2-slerp
MaziyarPanahi
2024-01-11T14:37:26Z
26
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "7b", "lazymergekit", "mistralai/Mistral-7B-Instruct-v0.2", "maywell/koOpenChat-sft", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-11T14:32:33Z
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- maywell/koOpenChat-sft
---

# koOpenChat-sft-Mistral-7B-Instruct-v0.2-slerp

koOpenChat-sft-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [maywell/koOpenChat-sft](https://huggingface.co/maywell/koOpenChat-sft)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 32]
      - model: maywell/koOpenChat-sft
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "MaziyarPanahi/koOpenChat-sft-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```