| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-27 12:29:05 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 500 values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-27 12:27:55 |
| card | string | length 11 – 1.01M |
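The rows below follow this schema. As a minimal sketch of how such rows could be loaded and filtered with the `datasets` library — the dataset path `<user>/<dataset>` is a placeholder, not the actual repository name:

```python
from datasets import load_dataset

# Placeholder repository name; replace with the actual dataset path.
ds = load_dataset("<user>/<dataset>", split="train")

# Filter and sort using the columns described above: keep text-generation
# models and list the five most-downloaded ones.
text_gen = ds.filter(lambda row: row["pipeline_tag"] == "text-generation")
top = text_gen.sort("downloads", reverse=True).select(range(5))
for row in top:
    print(row["modelId"], row["downloads"], row["likes"])
```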
TechxGenus/CursorCore-DS-6.7B-AWQ
TechxGenus
2024-10-10T06:38:04Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:TechxGenus/CursorCore-DS-6.7B", "base_model:quantized:TechxGenus/CursorCore-DS-6.7B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-10-06T12:55:15Z
---
tags:
- code
base_model:
- TechxGenus/CursorCore-DS-6.7B
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---

# CursorCore: Assist Programming through Aligning Anything

<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>

<hr>

- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
  - [Introduction](#introduction)
  - [Models](#models)
  - [Usage](#usage)
    - [1) Normal chat](#1-normal-chat)
    - [2) Assistant-Conversation](#2-assistant-conversation)
    - [3) Web Demo](#3-web-demo)
  - [Future Work](#future-work)
  - [Citation](#citation)
  - [Contribution](#contribution)

<hr>

## Introduction

CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning the models with data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.

<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

![CursorWeb](https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/CursorWeb.gif)

## Models

Our models have been open-sourced on Hugging Face. You can access them here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).

## Usage

Here are some examples of how to use our models:

### 1) Normal chat

Script:

````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````

Output:

````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````

### 2) Assistant-Conversation

In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
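Script 1 below imports `prepare_input_for_wf` from `eval.utils`, a helper shipped in the [CursorCore repository](https://github.com/TechxGenus/CursorCore) rather than on PyPI. If you run the script outside the repository, a rough stand-in can be inferred from the prompt layout shown in Output 1; this sketch is only an assumption about what the helper does, not its official implementation:

````python
def prepare_input_for_wf(sample):
    """Approximate stand-in for the repository helper.

    Inferred from the prompt layout in Output 1 (history/current/user roles,
    code wrapped in fenced blocks); the real eval.utils helper may differ.
    """
    def as_block(entry):
        return f"```{entry['lang']}\n{entry['code']}\n```"

    messages = [{"role": "history", "content": as_block(h)} for h in sample["history"]]
    messages.append({"role": "current", "content": as_block(sample["current"])})
    if sample["user"]:
        messages.append({"role": "user", "content": sample["user"]})
    return messages
````

With the repository checked out, prefer the real helper, since the exact prompt construction may differ from this approximation.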
Script 1:

````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

sample = {
    "history": [
        {
            "type": "code",
            "lang": "python",
            "code": """def quick_sort(arr):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n    left = [x for x in arr if x < pivot]\n    middle = [x for x in arr if x == pivot]\n    right = [x for x in arr if x > pivot]\n    return quick_sort(left) + middle + quick_sort(right)"""
        }
    ],
    "current": {
        "type": "code",
        "lang": "python",
        "code": """def quick_sort(array):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n    left = [x for x in arr if x < pivot]\n    middle = [x for x in arr if x == pivot]\n    right = [x for x in arr if x > pivot]\n    return quick_sort(left) + middle + quick_sort(right)"""
    },
    "user": ""
}

prompt = tokenizer.apply_chat_template(
    prepare_input_for_wf(sample),
    tokenize=False,
    chat_template="assistant-conversation",
    add_generation_prompt=True
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````

Output 1:

````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
    if len(array) <= 1:
        return array
    pivot = array[len(array) // 2]
    left = [x for x in array if x < pivot]
    middle = [x for x in array if x == pivot]
    right = [x for x in array if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.

To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.

This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````

Script 2:

````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-9B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

sample = {
    "history": [],
    "current": {
        "type": "code",
        "lang": "python",
        "code": """def quick_sort(array):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n    left = [x for x in arr if x < pivot]\n    middle = [x for x in arr if x == pivot]\n    right = [x for x in arr if x > pivot]\n    return quick_sort(left) + middle + quick_sort(right)"""
    },
    "user": "Add Docstring."
}

prompt = tokenizer.apply_chat_template(
    prepare_input_for_wf(sample),
    tokenize=False,
    chat_template="assistant-conversation",
    add_generation_prompt=True
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````

Output 2:

````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
    """
    This is an implementation of the quick sort algorithm.
    """
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````

For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:

Script for LC:

````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-1.5B-LC",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

sample = {
    "history": [],
    "current": {
        "type": "code",
        "lang": "python",
        "code": """def quick_sort(array):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n    left = [x for x in arr if x < pivot]\n    middle = [x for x in arr if x == pivot]\n    right = [x for x in arr if x > pivot]\n    return quick_sort(left) + middle + quick_sort(right)"""
    },
    "user": "Add Docstring."
}

prompt = tokenizer.apply_chat_template(
    prepare_input_for_lc(sample),
    tokenize=False,
    chat_template="assistant-conversation",
    add_generation_prompt=True
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````

Output for LC:

````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2     if len(arr) <= 1:
3         return arr
4     pivot = arr[len(arr) // 2]
5     left = [x for x in arr if x < pivot]
6     middle = [x for x in arr if x == pivot]
7     right = [x for x in arr if x > pivot]
8     return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
    '''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.

The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.

Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.

This modification will improve the code's documentation without altering its functionality.<|im_end|>
````

Script for SR:

````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr

tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-Yi-1.5B-SR",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

sample = {
    "history": [],
    "current": {
        "type": "code",
        "lang": "python",
        "code": """def quick_sort(array):\n    if len(arr) <= 1:\n        return arr\n    pivot = arr[len(arr) // 2]\n    left = [x for x in arr if x < pivot]\n    middle = [x for x in arr if x == pivot]\n    right = [x for x in arr if x > pivot]\n    return quick_sort(left) + middle + quick_sort(right)"""
    },
    "user": "Add Docstring."
}

prompt = tokenizer.apply_chat_template(
    prepare_input_for_sr(sample),
    tokenize=False,
    chat_template="assistant-conversation",
    add_generation_prompt=True
)

inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````

Output for SR:

````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
    """
    This function implements quick sort algorithm
    """
```<|next_end|><|im_end|>
````

### 3) Web Demo

We have created a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work

CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:

- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...

## Citation

```bibtex
@article{jiang2024cursorcore,
  title   = {CursorCore: Assist Programming through Aligning Anything},
  author  = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
  year    = {2024},
  journal = {arXiv preprint arXiv: 2410.07002}
}
```

## Contribution

Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
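The usage scripts above load the full-precision CursorCore-Yi checkpoints. This particular repository hosts an AWQ 4-bit quantization of CursorCore-DS-6.7B, which can be loaded the same way; the sketch below assumes the `autoawq` package is installed so that `transformers` can handle the quantized weights, and generation then proceeds exactly as in the scripts above.

````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# 4-bit AWQ checkpoint; loading assumes the autoawq package is installed.
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-DS-6.7B-AWQ")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/CursorCore-DS-6.7B-AWQ",
    torch_dtype=torch.float16,
    device_map="auto"
)
````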
TechxGenus/CursorCore-DS-6.7B-GPTQ
TechxGenus
2024-10-10T06:38:00Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:TechxGenus/CursorCore-DS-6.7B", "base_model:quantized:TechxGenus/CursorCore-DS-6.7B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-10-08T04:53:27Z
---
tags:
- code
base_model:
- TechxGenus/CursorCore-DS-6.7B
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---

# CursorCore: Assist Programming through Aligning Anything

The model card body is identical to the CursorCore-DS-6.7B-AWQ card above; see that card for the introduction, usage examples, web demo, future work, citation, and contribution notes.
TechxGenus/CursorCore-DS-1.3B-SR-AWQ
TechxGenus
2024-10-10T06:37:40Z
79
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:TechxGenus/CursorCore-DS-1.3B-SR", "base_model:quantized:TechxGenus/CursorCore-DS-1.3B-SR", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-10-08T04:51:11Z
---
tags:
- code
base_model:
- TechxGenus/CursorCore-DS-1.3B-SR
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---

# CursorCore: Assist Programming through Aligning Anything

The model card body is identical to the CursorCore-DS-6.7B-AWQ card above; see that card for the introduction, usage examples, web demo, future work, citation, and contribution notes.
TechxGenus/CursorCore-DS-1.3B-SR-GPTQ
TechxGenus
2024-10-10T06:37:35Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:TechxGenus/CursorCore-DS-1.3B-SR", "base_model:quantized:TechxGenus/CursorCore-DS-1.3B-SR", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-10-08T04:51:58Z
---
tags:
- code
base_model:
- TechxGenus/CursorCore-DS-1.3B-SR
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---

# CursorCore: Assist Programming through Aligning Anything

The model card body is identical to the CursorCore-DS-6.7B-AWQ card above; see that card for the introduction, usage examples, web demo, future work, citation, and contribution notes.
TechxGenus/CursorCore-DS-1.3B-LC-GPTQ
TechxGenus
2024-10-10T06:37:16Z
78
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:TechxGenus/CursorCore-DS-1.3B-LC", "base_model:quantized:TechxGenus/CursorCore-DS-1.3B-LC", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-10-08T04:50:49Z
---
tags:
- code
base_model:
- TechxGenus/CursorCore-DS-1.3B-LC
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: deepseek
license_link: LICENSE
---

# CursorCore: Assist Programming through Aligning Anything

The model card body is identical to the CursorCore-DS-6.7B-AWQ card above; see that card for the introduction, usage examples, web demo, future work, citation, and contribution notes.
} prompt = tokenizer.apply_chat_template( prepare_input_for_lc(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output for LC: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python 1 def quick_sort(array): 2 if len(arr) <= 1: 3 return arr 4 pivot = arr[len(arr) // 2] 5 left = [x for x in arr if x < pivot] 6 middle = [x for x in arr if x == pivot] 7 right = [x for x in arr if x > pivot] 8 return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>1,1 ``` '''This function will sort an array using quick sort algorithm''' ```<|next_end|> To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future. The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand. Here's the plan: 1. Add a docstring at the beginning of the `quick_sort` function. 2. Ensure the docstring is clear and concise, describing the purpose of the function. This modification will improve the code's documentation without altering its functionality.<|im_end|> ```` Script for SR: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_sr tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-1.5B-SR", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." } prompt = tokenizer.apply_chat_template( prepare_input_for_sr(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output for SR: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): <|search_and_replace|> def quick_sort(array): """ This function implements quick sort algorithm """ ```<|next_end|><|im_end|> ```` ### 3) Web Demo We create a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details. 
## Future Work CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example: - Repository-level editing support - Better and faster editing formats - Better user interface and presentation - ... ## Citation ```bibtex @article{jiang2024cursorcore, title = {CursorCore: Assist Programming through Aligning Anything}, author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang}, year = {2024}, journal = {arXiv preprint arXiv: 2410.07002} } ``` ## Contribution Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
TechxGenus/CursorCore-DS-1.3B-GPTQ
TechxGenus
2024-10-10T06:36:27Z
77
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "arxiv:2410.07002", "base_model:TechxGenus/CursorCore-DS-1.3B", "base_model:quantized:TechxGenus/CursorCore-DS-1.3B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-10-08T04:50:14Z
--- tags: - code base_model: - TechxGenus/CursorCore-DS-1.3B library_name: transformers pipeline_tag: text-generation license: other license_name: deepseek license_link: LICENSE --- # CursorCore: Assist Programming through Aligning Anything <p align="center"> <a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> | <a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> | <a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> | <a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> | <a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> | <a href="https://discord.gg/Z5Tev8fV">[Discord]</a> </p> <hr> - [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything) - [Introduction](#introduction) - [Models](#models) - [Usage](#usage) - [1) Normal chat](#1-normal-chat) - [2) Assistant-Conversation](#2-assistant-conversation) - [3) Web Demo](#3-web-demo) - [Future Work](#future-work) - [Citation](#citation) - [Contribution](#contribution) <hr> ## Introduction CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more. <p align="center"> <img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png"> </p> ![CursorWeb](https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/CursorWeb.gif) ## Models Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2"). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3) ## Usage Here are some examples of how to use our model: ### 1) Normal chat Script: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-9B", torch_dtype=torch.bfloat16, device_map="auto" ) messages = [ {"role": "user", "content": "Hi!"}, ] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512) print(tokenizer.decode(outputs[0])) ```` Output: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>user Hi!<|im_end|> <|im_start|>assistant Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|> ```` ### 2) Assistant-Conversation In our work, we introduce a new framework of AI-assisted programming task. It is designed for aligning anything during programming process, used for the implementation of features like Tab and Inline Chat. 
Script 1: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_wf tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-9B", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [ { "type": "code", "lang": "python", "code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" } ], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "" } prompt = tokenizer.apply_chat_template( prepare_input_for_wf(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output 1: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>history ```python def quick_sort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): if len(array) <= 1: return array pivot = array[len(array) // 2] left = [x for x in array if x < pivot] middle = [x for x in array if x == pivot] right = [x for x in array if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|next_end|> The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors. To implement this, we will: 1. Update the parameter name in the function definition from `arr` to `array`. 2. Ensure that all references to `arr` within the function are updated to `array`. 
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|> ```` Script 2: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_wf tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-9B", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." } prompt = tokenizer.apply_chat_template( prepare_input_for_wf(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output 2: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): """ This is an implementation of the quick sort algorithm. """ if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|next_end|><|im_end|> ```` For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows: Script for LC: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_lc tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-1.5B-LC", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." 
} prompt = tokenizer.apply_chat_template( prepare_input_for_lc(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output for LC: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python 1 def quick_sort(array): 2 if len(arr) <= 1: 3 return arr 4 pivot = arr[len(arr) // 2] 5 left = [x for x in arr if x < pivot] 6 middle = [x for x in arr if x == pivot] 7 right = [x for x in arr if x > pivot] 8 return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>1,1 ``` '''This function will sort an array using quick sort algorithm''' ```<|next_end|> To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future. The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand. Here's the plan: 1. Add a docstring at the beginning of the `quick_sort` function. 2. Ensure the docstring is clear and concise, describing the purpose of the function. This modification will improve the code's documentation without altering its functionality.<|im_end|> ```` Script for SR: ````python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from eval.utils import prepare_input_for_sr tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR") model = AutoModelForCausalLM.from_pretrained( "TechxGenus/CursorCore-Yi-1.5B-SR", torch_dtype=torch.bfloat16, device_map="auto" ) sample = { "history": [], "current": { "type": "code", "lang": "python", "code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)""" }, "user": "Add Docstring." } prompt = tokenizer.apply_chat_template( prepare_input_for_sr(sample), tokenize=False, chat_template="assistant-conversation", add_generation_prompt=True ) inputs = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False) print(tokenizer.decode(outputs[0])) ```` Output for SR: ````txt <|im_start|>system You are a helpful programming assistant.<|im_end|> <|im_start|>current ```python def quick_sort(array): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quick_sort(left) + middle + quick_sort(right) ```<|im_end|> <|im_start|>user Add Docstring.<|im_end|> <|im_start|>assistant <|next_start|>```python def quick_sort(array): <|search_and_replace|> def quick_sort(array): """ This function implements quick sort algorithm """ ```<|next_end|><|im_end|> ```` ### 3) Web Demo We create a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details. 
## Future Work CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example: - Repository-level editing support - Better and faster editing formats - Better user interface and presentation - ... ## Citation ```bibtex @article{jiang2024cursorcore, title = {CursorCore: Assist Programming through Aligning Anything}, author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang}, year = {2024}, journal = {arXiv preprint arXiv: 2410.07002} } ``` ## Contribution Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
mradermacher/Vi-Qwen2-3B-RAG-GGUF
mradermacher
2024-10-10T06:35:06Z
129
0
transformers
[ "transformers", "gguf", "retrieval-augmented-generation", "text-generation-inference", "vi", "base_model:AITeamVN/Vi-Qwen2-3B-RAG", "base_model:quantized:AITeamVN/Vi-Qwen2-3B-RAG", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T06:28:37Z
--- base_model: AITeamVN/Vi-Qwen2-3B-RAG language: - vi library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - retrieval-augmented-generation - text-generation-inference --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AITeamVN/Vi-Qwen2-3B-RAG <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-3B-RAG-GGUF/resolve/main/Vi-Qwen2-3B-RAG.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
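As an illustrative sketch only (not part of the original card): one common way to run these GGUF quants is through llama-cpp-python. The choice of the Q4_K_M file, the context size, and the prompt below are assumptions, not recommendations from the quantizer.

```python
# Minimal sketch: download one of the quants listed above from the Hub and run a chat turn.
# Assumes llama-cpp-python and huggingface-hub are installed and the Q4_K_M file fits in memory.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Vi-Qwen2-3B-RAG-GGUF",
    filename="Vi-Qwen2-3B-RAG.Q4_K_M.gguf",  # any quant from the table above works
    n_ctx=4096,  # context window; adjust to your memory budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Xin chào!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Larger quants from the table trade memory for quality; Q4_K_S and Q4_K_M are the "fast, recommended" middle ground per the notes above.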
mradermacher/Vi-Qwen2-1.5B-RAG-GGUF
mradermacher
2024-10-10T06:34:07Z
23
0
transformers
[ "transformers", "gguf", "retrieval-augmented-generation", "text-generation-inference", "vi", "base_model:AITeamVN/Vi-Qwen2-1.5B-RAG", "base_model:quantized:AITeamVN/Vi-Qwen2-1.5B-RAG", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T06:27:19Z
--- base_model: AITeamVN/Vi-Qwen2-1.5B-RAG language: - vi library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - retrieval-augmented-generation - text-generation-inference --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AITeamVN/Vi-Qwen2-1.5B-RAG <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Vi-Qwen2-1.5B-RAG-GGUF/resolve/main/Vi-Qwen2-1.5B-RAG.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/gpt2-xl-GGUF
QuantFactory
2024-10-10T06:27:09Z
225
2
null
[ "gguf", "en", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-10-10T05:53:29Z
--- language: en license: mit --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/gpt2-xl-GGUF This is quantized version of [openai-community/gpt2-xl](https://huggingface.co/openai-community/gpt2-xl) created using llama.cpp # Original Model Card # GPT-2 XL ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-authors) ## Model Details **Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python from transformers import pipeline, set_seed generator = pipeline('text-generation', model='gpt2-xl') set_seed(42) generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl') model = GPT2Model.from_pretrained('gpt2-xl') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl') model = TFGPT2Model.from_pretrained('gpt2-xl') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. > > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.** #### Biases Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example: ```python from transformers import pipeline, set_seed generator = pipeline('text-generation', model='gpt2-xl') set_seed(42) generator("The man worked as a", max_length=10, num_return_sequences=5) set_seed(42) generator("The woman worked as a", max_length=10, num_return_sequences=5) ``` This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. #### Risks and Limitations When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/): > GPT-2 can be fine-tuned for misuse. 
Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text. The blog post further discusses the risks, limitations, and biases of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. 
For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. #### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 | ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L). - **Hardware Type:** 32 TPUv3 chips - **Hours used:** 168 - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
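As an illustrative aside (not part of the original card): the evaluation above reports perplexity-style metrics derived from the average negative log-probability per prediction unit. Below is a minimal sketch of computing perplexity for a short placeholder text with GPT-2 XL in PyTorch, assuming only the `transformers` and `torch` packages.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")  # the 1.5B-parameter checkpoint is several GB
model.eval()

text = "Language models are trained to predict the next token."  # placeholder text
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels equal to the inputs makes the model shift them internally by one
    # position and return the mean cross-entropy (negative log-likelihood) per token.
    out = model(**enc, labels=enc["input_ids"])

perplexity = torch.exp(out.loss)
print(f"Perplexity: {perplexity.item():.2f}")
```

Benchmark numbers such as those in the results table additionally rely on dataset-specific de-tokenization and per-character or per-byte normalization, as described in the paper.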
mradermacher/nature-buddy-GGUF
mradermacher
2024-10-10T06:25:41Z
30
0
transformers
[ "transformers", "gguf", "trl", "sft", "generated_from_trainer", "en", "base_model:nkasmanoff/nature-buddy", "base_model:quantized:nkasmanoff/nature-buddy", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T06:23:59Z
--- base_model: nkasmanoff/nature-buddy language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - trl - sft - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/nkasmanoff/nature-buddy <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q2_K.gguf) | Q2_K | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q3_K_S.gguf) | Q3_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.IQ4_XS.gguf) | IQ4_XS | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q3_K_L.gguf) | Q3_K_L | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q5_K_S.gguf) | Q5_K_S | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q5_K_M.gguf) | Q5_K_M | 0.2 | | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q6_K.gguf) | Q6_K | 0.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/nature-buddy-GGUF/resolve/main/nature-buddy.f16.gguf) | f16 | 0.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
keeeeenw/MicroLlama
keeeeenw
2024-10-10T06:19:37Z
1035
42
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:cerebras/SlimPajama-627B", "arxiv:2401.02385", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-03-29T04:23:22Z
--- language: - en license: apache-2.0 library_name: transformers datasets: - cerebras/SlimPajama-627B metrics: - accuracy model-index: - name: MicroLlama results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 19.85 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 2.83 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 0.0 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 1.45 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 4.79 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 1.53 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama name: Open LLM Leaderboard --- # Model Card for Model ID As an individual with limited access and compute, I have been wondering if I could build a decent large-language model for a while. As the big mega corporations are focused on getting bigger and bigger models, I am going small! As a result, I set up the following goals to **pretraining** a **300M Llama model** with the following restrictions: 1. My overall budget is $500. 2. Must pretrain an LLM from scratch with a fully open-source dataset and model. 3. Not allowed to finetune a model or use another LLM such as GPT-4 to generate any training data. ## Model Details This project is heavily based on [TinyLlama](https://github.com/jzhang38/TinyLlama), which is an awesome open-source project aimed to **pretraining** a **1.1.1B Llama model on 1T tokens**. This project is work in progress. Currently, I have spent \$280 on compute using 4 x Nvidia 4090 on [Vast.ai](https://vast.ai) and \$3 on AWS S3 storage after 4 days of training of the **300M Llama model** with **50B** tokens. I modified [TinyLlama](https://github.com/jzhang38/TinyLlama) to support the following features (I will release my forked version of the source code after some clean up): 1. Pretrain a smaller size 300M model on [Slimpajama](https://huggingface.co/datasets/cerebras/slimpajama-627b) 2. 
Removed [Starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata) so that my model can focus on [Slimpajama](https://huggingface.co/datasets/cerebras/slimpajama-627b). This also means my model probably cannot do coding without fine-tuning. 3. Added the ability to process and tokenize [Slimpajama](https://huggingface.co/datasets/cerebras/slimpajama-627b) while downloading the data. The original setup only works with pre-downloaded data. This turns out to be a good time-saver because downloading 800G+ of data on a non-commercial Internet is very slow, and processing all of [Slimpajama](https://huggingface.co/datasets/cerebras/slimpajama-627b) data also takes time. 4. Various helper scripts and Python code such as python code for uploading the pretrained checkpoint to the huggingface hub. 5. Bug fixes. Here are my major model configurations based on [TinyLlama](https://github.com/jzhang38/TinyLlama) settings. ``` block_size=2048, vocab_size=32000, padding_multiple=64, n_layer=12, n_head=16, n_embd=1024, rotary_percentage=1.0, parallel_residual=False, bias=False, _norm_class="FusedRMSNorm", norm_eps=1e-5, #Llama 2 use 1e-5. Llama 1 use 1e-6 _mlp_class="LLaMAMLP", intermediate_size=5632, n_query_groups=4, ``` ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** keeeeenw - **Funded by:** myself for <$500 - **Model type:** 300M Llama model - **Language(s) (NLP):** EN - **License:** Apache License 2.0 <!-- **Finetuned from model [optional]:** [More Information Needed]--> ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/keeeeenw/MicroLlama <!-- **Paper [optional]:** [More Information Needed] --> <!--**Demo [optional]:** [More Information Needed] --> ## Uses 1. Install dependencies ``` pip install transformers pip install torch ``` 2. Run code! ```python import torch import transformers from transformers import AutoTokenizer, LlamaForCausalLM def generate_text(prompt, model, tokenizer): text_generator = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", tokenizer=tokenizer ) formatted_prompt = f"Question: {prompt} Answer:" sequences = text_generator( formatted_prompt, do_sample=True, top_k=5, top_p=0.9, num_return_sequences=1, repetition_penalty=1.5, max_new_tokens=128, ) for seq in sequences: print(f"Result: {seq['generated_text']}") # use the same tokenizer as TinyLlama tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-step-50K-105b") # load model from huggingface # question from https://www.reddit.com/r/LocalLLaMA/comments/13zz8y5/what_questions_do_you_ask_llms_to_check_their/ model = LlamaForCausalLM.from_pretrained( "keeeeenw/MicroLlama") generate_text("Please provide me instructions on how to steal an egg from my chicken.", model, tokenizer) ``` ## Evaluation I performed the experiment using the standard [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) setup. Following the same setup as [TinyLlama](https://github.com/jzhang38/TinyLlama), I used **acc_norm** for all datasets except for **winogrande** and **boolq** which used **acc** as the metrics. 1. **[keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama)** is the evaluation results for my **300M Llama model on 50B tokens**. 2. 
**[google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased)** is the baseline because it is one of the most popular small LLMs and it has a similar parameter count of **336M**.
3. **[PY007/TinyLlama-1.1B-Chat-v0.1](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.1)**: as a sanity check, I performed evaluation against one of the [TinyLlama](https://github.com/jzhang38/TinyLlama) models to validate my setup for [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). These numbers are exactly the same as the ones reported by [TinyLlama](https://github.com/jzhang38/TinyLlama).
4. **TinyLlama-1.1B-intermediate-step-1431k-3T** is the evaluation result for the best model created and reported by [TinyLlama](https://github.com/jzhang38/TinyLlama).

| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|--------------------------------------------|-----------------|-----------|-------|------------|-------|-------|-------|-------|-------|
| keeeeenw/MicroLlama | 50B | 34.30 | 30.60 | 51.54 | 23.29 | 39.06 | 53.15 | 64.58 | 42.36 |
| google-bert/bert-large-uncased | N/A | 24.53 | 26.20 | 49.80 | 25.68 | 25.08 | 40.86 | 47.66 | 34.26 |
| PY007/TinyLlama-1.1B-Chat-v0.1 | 503B | 53.81 | 32.20 | 55.01 | 28.67 | 49.62 | 58.04 | 69.64 | 49.57 |
| TinyLlama-1.1B-intermediate-step-1431k-3T | 3T | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |

To reproduce my numbers, please install [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and run the following command:

```bash
lm_eval \
    --model hf \
    --model_args pretrained=keeeeenw/MicroLlama,dtype="float",tokenizer=TinyLlama/TinyLlama-1.1B-step-50K-105b \
    --tasks hellaswag,openbookqa,winogrande,arc_easy,arc_challenge,boolq,piqa \
    --device cuda:0 \
    --batch_size 64
```

#### Observations

1. Because [keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama) is much smaller than [TinyLlama](https://github.com/jzhang38/TinyLlama), our model does not achieve the same impressive results, but the numbers are closer than I expected.
2. Our model outperforms [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased), which is actually slightly larger. The only dataset on which [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) outperformed our model is ARC_c (arc_challenge). I will provide more analysis in a future study.

Based on the evaluation above, our model should be a good starting point for fine-tuning tasks that are typically performed with the BERT family of models. Some of these tasks may include:

1. [sentence transformer](https://huggingface.co/sentence-transformers)
2. [bertscore](https://huggingface.co/spaces/evaluate-metric/bertscore)
3. A lightweight chatbot after some fine-tuning.

## Citation

This repository is built upon [TinyLlama](https://github.com/jzhang38/TinyLlama), which is based on [lit-gpt](https://github.com/Lightning-AI/lit-gpt) and [flash-attention](https://github.com/Dao-AILab/flash-attention).
``` @misc{zhang2024tinyllama, title={TinyLlama: An Open-Source Small Language Model}, author={Peiyuan Zhang and Guangtao Zeng and Tianduo Wang and Wei Lu}, year={2024}, eprint={2401.02385}, archivePrefix={arXiv}, primaryClass={cs.CL} } @online{lit-gpt, author = {Lightning AI}, title = {Lit-GPT}, url = {https://github.com/Lightning-AI/lit-gpt}, year = {2023}, } @article{dao2023flashattention2, title ={Flash{A}ttention-2: Faster Attention with Better Parallelism and Work Partitioning}, author ={Dao, Tri}, year ={2023} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_keeeeenw__MicroLlama) | Metric |Value| |-------------------|----:| |Avg. | 5.08| |IFEval (0-Shot) |19.85| |BBH (3-Shot) | 2.83| |MATH Lvl 5 (4-Shot)| 0.00| |GPQA (0-shot) | 1.45| |MuSR (0-shot) | 4.79| |MMLU-PRO (5-shot) | 1.53|
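As a quick sanity check on the 300M figure, the snippet below estimates the parameter count implied by the configuration listed earlier in this card (grouped-query attention with `n_query_groups=4`, a SwiGLU-style `LLaMAMLP`, and `bias=False`). Treating the input embedding and output head as untied is my assumption; with tied embeddings the total drops by roughly 33M.

```python
# Estimate the parameter count from the configuration listed in this card.
vocab_size, n_embd, n_layer = 32000, 1024, 12
n_head, n_query_groups, intermediate_size = 16, 4, 5632

head_dim = n_embd // n_head                # 64
kv_dim = n_query_groups * head_dim         # 256 (grouped-query attention)

attn = n_embd * (n_embd + 2 * kv_dim + n_embd)   # Wq, Wk, Wv, Wo projections (no bias)
mlp = 3 * n_embd * intermediate_size             # gate, up, and down projections
norms = 2 * n_embd                               # two RMSNorm weight vectors per layer
per_layer = attn + mlp + norms

embeddings = 2 * vocab_size * n_embd             # assumes untied input/output embeddings
total = n_layer * per_layer + embeddings + n_embd  # plus the final RMSNorm

print(f"~{total / 1e6:.1f}M parameters")         # ~304.6M, consistent with the 300M label
```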
dakwi/chessgpt2-small-m
dakwi
2024-10-10T06:17:00Z
129
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-09T21:49:22Z
--- library_name: transformers license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: chessgpt2-small-m results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chessgpt2-small-m This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9060 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.04 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:-----:|:---------------:| | 2.3011 | 0.1280 | 1000 | 1.7077 | | 1.5883 | 0.2560 | 2000 | 1.4142 | | 1.3992 | 0.3839 | 3000 | 1.2912 | | 1.2978 | 0.5119 | 4000 | 1.2150 | | 1.2322 | 0.6399 | 5000 | 1.1646 | | 1.1846 | 0.7679 | 6000 | 1.1219 | | 1.1477 | 0.8958 | 7000 | 1.0882 | | 1.1142 | 1.0238 | 8000 | 1.0618 | | 1.0801 | 1.1518 | 9000 | 1.0461 | | 1.0616 | 1.2798 | 10000 | 1.0251 | | 1.0409 | 1.4077 | 11000 | 1.0020 | | 1.0253 | 1.5357 | 12000 | 0.9859 | | 1.0098 | 1.6637 | 13000 | 0.9726 | | 0.9947 | 1.7917 | 14000 | 0.9585 | | 0.9817 | 1.9196 | 15000 | 0.9472 | | 0.9591 | 2.0476 | 16000 | 0.9364 | | 0.9338 | 2.1756 | 17000 | 0.9291 | | 0.9273 | 2.3036 | 18000 | 0.9212 | | 0.9219 | 2.4315 | 19000 | 0.9153 | | 0.9167 | 2.5595 | 20000 | 0.9107 | | 0.9123 | 2.6875 | 21000 | 0.9081 | | 0.9103 | 2.8155 | 22000 | 0.9065 | | 0.9092 | 2.9434 | 23000 | 0.9060 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
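Since the sections above are placeholders, here is a minimal, hedged inference sketch for this GPT-2-based chess model using `transformers`. The move-notation format in the prompt (numbered SAN moves) is an assumption on my part, since the training data is not documented; adjust it to whatever format the model was actually trained on.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned GPT-2 chess checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("dakwi/chessgpt2-small-m")
model = AutoModelForCausalLM.from_pretrained("dakwi/chessgpt2-small-m")

# Assumption: the model was trained on game transcripts in SAN notation.
prompt = "1. e4 e5 2. Nf3 Nc6 3. Bb5"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation of the game.
outputs = model.generate(
    **inputs,
    max_new_tokens=16,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```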
ylic204/dummy-model
ylic204
2024-10-10T06:11:23Z
117
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-10-10T06:10:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
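The "How to Get Started" section above is still a placeholder. Based only on the repository tags (`camembert`, `fill-mask`), a minimal sketch of how such a checkpoint is typically queried is shown below; the example sentence and the assumption that the model behaves like a standard CamemBERT masked-language model are mine, not the author's.

```python
from transformers import pipeline

# Assumption: the checkpoint exposes the standard CamemBERT fill-mask interface.
fill_mask = pipeline("fill-mask", model="ylic204/dummy-model")

# CamemBERT uses "<mask>" as its mask token.
for prediction in fill_mask("Le camembert est <mask> :)"):
    print(prediction["token_str"], round(prediction["score"], 3))
```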
tiiuae/falcon-mamba-7b-Q4_K_M-GGUF
tiiuae
2024-10-10T06:11:14Z
12
1
null
[ "gguf", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2312.00752", "arxiv:2410.05355", "base_model:tiiuae/falcon-mamba-7b", "base_model:quantized:tiiuae/falcon-mamba-7b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-08-19T16:52:31Z
--- license: other license_name: falcon-mamba-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html base_model: tiiuae/falcon-mamba-7b language: - en datasets: - tiiuae/falcon-refinedweb --- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/> **GGUF quantization of [`falcon-mamba-7b`](https://huggingface.co/tiiuae/falcon-mamba-7b) in the format `Q4_K_M`** # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) # TL;DR # Model Details ## Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Architecture:** Mamba - **Language(s) (NLP):** Mainly English - **License:** TII Falcon-Mamba License 2.0 <br> # Usage Refer to the documentation of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) to understand how to run this model locally on your machine. Download the GGUF weights with the command below: ```bash huggingface-cli download tiiuae/falcon-mamba-7b-Q4_K_M-GGUF --include falcon-mamba-7B-Q4_K_M.gguf --local-dir ./ ``` Once downloaded, you can quickly chat with it: ```bash ./llama-cli -m falcon-mamba-7b-Q4_K_M-GGUF -p "Hello how are you?" ``` # Training Details ## Training Data Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated. Similar to the others [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192. Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long range dependency. At the last training stage, small portion of high-quality curated data was used to further enhance performance. Overall, the data sources included RefinedWeb-English, high quality technical data, code data and math data extracted from public sources. In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage. The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. ## Training Procedure Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO. ### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Max learning rate | 6.4e-4 | Following a WSD (warmup-stable-decay) learning rate schedule | | Weight decay | 1e-1 | | | Batch size | 2048 | | The model was trained AdamW optimizer, WSD (warmup-stable-decay) learning rate schedule, and a batch size rampup from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during first 50 GT of training. 
In the stable phase we used maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with exponential schedule over 500 GT. Also, we applied *BatchScaling* during the rampup — rescaling learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant. ### Speeds, Sizes, Times The model training took roughly two months. <br> # Evaluation ## Benchmarks We evaluate our model on all benchmarks of the new leaderboard's version using the `lm-evaluation-harness` package, and then normalize the evaluation results with HuggingFace score normalization. | `model name` |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| |:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B` | 33.36 | 19.88 | 3.63 |8.05 |10.86 | 14.47 |**15.04**| | `TRI-ML/mamba-7b-rw`<sup>*</sup>| 22.46 | 6.71 | 0.45 | 1.12 | 5.51 | 1.69 | 6.25 | |***Hybrid SSM-attention models*** | | | | | | | |`recurrentgemma-9b` | 30.76 | 14.80 | 4.83 | 4.70 | 6.60 | 17.88 | 13.20 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 24.06 | 21.12 | 3.32 | 3.03 | 7.74 | 16.02 | 12.55 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 32.61 | 21.94 | 2.34 | 2.80 | 7.53 | 15.44 | 13.78 | | `Meta-Llama-3-8B` | 14.55 | 24.50 | 3.25 | 7.38 | 6.24 | 24.55 | 13.41 | | `Meta-Llama-3.1-8B` | 12.70 | 25.29 | 4.61 | 6.15 | 8.98 | 24.95 | 13.78 | | `Mistral-7B-v0.1` | 23.86 | 22.02 | 2.49 | 5.59 | 10.68 | 22.36 | 14.50 | | `Mistral-Nemo-Base-2407 (12B)` | 16.83 | 29.37 | 4.98 | 5.82 | 6.52 | 27.46 | 15.08 | | `gemma-7B` | 26.59 | 21.12 | 6.42 | 4.92 | 10.98 | 21.64 |**15.28**| Also, we evaluate our model on the benchmarks of the first leaderboard using `lighteval`. | `model name` |`ARC`|`HellaSwag` |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average` | |:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B`<sup>*</sup> | 62.03 | 80.82 | 62.11 | 73.64 | 53.42 | 52.54 | **64.09** | | `TRI-ML/mamba-7b-rw`<sup>*</sup> | 51.25 | 80.85 | 33.41 | 71.11 | 32.08 | 4.70 | 45.52 | |***Hybrid SSM-attention models***| | | | | | | | | `recurrentgemma-9b`<sup>**</sup> |52.00 | 80.40 | 60.50 | 73.60 | 38.60 | 42.60 | 57.95 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 56.14 | 82.23 | 58.11 | 79.87 | 52.88 | 30.78 | 60.00 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 59.73 | 82.91 | 58.37 | 78.30 | 52.56 | 53.83 | **64.28** | | `Meta-Llama-3-8B` | 60.24 | 82.23 | 66.70 | 78.45 | 42.93 | 45.19 | 62.62 | | `Meta-Llama-3.1-8B` | 58.53 | 82.13 | 66.43 | 74.35 | 44.29 | 47.92 | 62.28 | | `Mistral-7B-v0.1` | 59.98 | 83.31 | 64.16 | 78.37 | 42.15 | 37.83 | 60.97 | | `gemma-7B` | 61.09 | 82.20 | 64.56 | 79.01 | 44.79 | 50.87 | 63.75 | Mostly, we took evaluation results from both leaderboards. For the models marked by *star* we evaluated the tasks internally, while for the models marked by two *stars* the results were taken from paper or model card. <br> # Technical Specifications ## Model Architecture and Objective Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)). 
| **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 64 | Number of layers | | `d_model` | 4096 | Hidden dimension | | `d_state` | 16 | The SSM state dimension | | Vocabulary | 65024 | Vocabulary Size | | Sequence length | 8192 | During the last training stages | ## Compute Infrastructure ### Hardware Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. ### Software Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels. <br> # Citation You can use the following bibtex citation: ``` @misc{zuo2024falconmambacompetitiveattentionfree, title={Falcon Mamba: The First Competitive Attention-free 7B Language Model}, author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid}, year={2024}, eprint={2410.05355}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.05355}, } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/tiiuae__falcon-mamba-7b-details) | Metric |Value| |-------------------|----:| |Avg. |15.04| |IFEval (0-Shot) |33.36| |BBH (3-Shot) |19.88| |MATH Lvl 5 (4-Shot)| 3.63| |GPQA (0-shot) | 8.05| |MuSR (0-shot) |10.86| |MMLU-PRO (5-shot) |14.47|
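The Training Hyperparameters section above describes a WSD (warmup-stable-decay) schedule with a batch-size ramp-up and *BatchScaling*. As an illustration only, the sketch below reproduces that behaviour in a few lines; the peak learning rate, the 128→2048 ramp over the first 50 GT, the η_min = η_max/256 target, and the 500 GT exponential decay come from the card, while the linear shape of the ramp and the point where the decay starts are my assumptions.

```python
import math

# Values taken from the card above.
ETA_MAX = 6.4e-4              # peak learning rate
B_MIN, B_MAX = 128, 2048      # batch-size ramp-up
RAMP_GT = 50                  # ramp-up length in giga-tokens (GT)
DECAY_GT = 500                # length of the exponential decay in GT
ETA_MIN = ETA_MAX / 256       # final learning rate

def batch_size(t_gt: float) -> int:
    """Assumed linear batch-size ramp over the first RAMP_GT giga-tokens."""
    if t_gt >= RAMP_GT:
        return B_MAX
    return int(B_MIN + (B_MAX - B_MIN) * t_gt / RAMP_GT)

def learning_rate(t_gt: float, decay_start_gt: float) -> float:
    """WSD schedule with BatchScaling: keep T_noise = eta / sqrt(b) constant during the ramp."""
    if t_gt < RAMP_GT:
        # BatchScaling: rescale eta so the Adam noise temperature stays constant.
        return ETA_MAX * math.sqrt(batch_size(t_gt) / B_MAX)
    if t_gt < decay_start_gt:
        return ETA_MAX  # stable phase
    # Exponential decay that reaches ETA_MIN after DECAY_GT giga-tokens.
    progress = min((t_gt - decay_start_gt) / DECAY_GT, 1.0)
    return ETA_MAX * (ETA_MIN / ETA_MAX) ** progress

# decay_start_gt=5000 is an assumption for a ~5,500 GT run.
for t in (10, 100, 5100, 5500):
    print(t, batch_size(t), f"{learning_rate(t, decay_start_gt=5000):.2e}")
```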
tiiuae/falcon-mamba-7b-Q8_0-GGUF
tiiuae
2024-10-10T06:11:05Z
15
2
null
[ "gguf", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2312.00752", "arxiv:2410.05355", "base_model:tiiuae/falcon-mamba-7b", "base_model:quantized:tiiuae/falcon-mamba-7b", "license:other", "endpoints_compatible", "region:us" ]
null
2024-08-18T14:29:57Z
--- license: other license_name: falcon-mamba-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html base_model: tiiuae/falcon-mamba-7b language: - en datasets: - tiiuae/falcon-refinedweb --- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/> **GGUF quantization of [`falcon-mamba-7b`](https://huggingface.co/tiiuae/falcon-mamba-7b) in the format `Q8_0`** # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) # TL;DR # Model Details ## Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Architecture:** Mamba - **Language(s) (NLP):** Mainly English - **License:** TII Falcon-Mamba License 2.0 <br> # Usage Refer to the documentation of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) to understand how to run this model locally on your machine. Download the GGUF weights with the command below: ```bash huggingface-cli download tiiuae/falcon-mamba-7b-Q8_0-GGUF --include falcon-mamba-7B-Q8_0.gguf --local-dir ./ ``` Once downloaded, you can quickly chat with it: ```bash ./llama-cli -m falcon-mamba-7b-Q8_0-GGUF -p "Hello how are you?" ``` # Training Details ## Training Data Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated. Similar to the others [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192. Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long range dependency. At the last training stage, small portion of high-quality curated data was used to further enhance performance. Overall, the data sources included RefinedWeb-English, high quality technical data, code data and math data extracted from public sources. In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage. The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. ## Training Procedure Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO. ### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Max learning rate | 6.4e-4 | Following a WSD (warmup-stable-decay) learning rate schedule | | Weight decay | 1e-1 | | | Batch size | 2048 | | The model was trained AdamW optimizer, WSD (warmup-stable-decay) learning rate schedule, and a batch size rampup from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during first 50 GT of training. 
In the stable phase we used maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with exponential schedule over 500 GT. Also, we applied *BatchScaling* during the rampup — rescaling learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant. ### Speeds, Sizes, Times The model training took roughly two months. <br> # Evaluation ## Benchmarks We evaluate our model on all benchmarks of the new leaderboard's version using the `lm-evaluation-harness` package, and then normalize the evaluation results with HuggingFace score normalization. | `model name` |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| |:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B` | 33.36 | 19.88 | 3.63 |8.05 |10.86 | 14.47 |**15.04**| | `TRI-ML/mamba-7b-rw`<sup>*</sup>| 22.46 | 6.71 | 0.45 | 1.12 | 5.51 | 1.69 | 6.25 | |***Hybrid SSM-attention models*** | | | | | | | |`recurrentgemma-9b` | 30.76 | 14.80 | 4.83 | 4.70 | 6.60 | 17.88 | 13.20 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 24.06 | 21.12 | 3.32 | 3.03 | 7.74 | 16.02 | 12.55 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 32.61 | 21.94 | 2.34 | 2.80 | 7.53 | 15.44 | 13.78 | | `Meta-Llama-3-8B` | 14.55 | 24.50 | 3.25 | 7.38 | 6.24 | 24.55 | 13.41 | | `Meta-Llama-3.1-8B` | 12.70 | 25.29 | 4.61 | 6.15 | 8.98 | 24.95 | 13.78 | | `Mistral-7B-v0.1` | 23.86 | 22.02 | 2.49 | 5.59 | 10.68 | 22.36 | 14.50 | | `Mistral-Nemo-Base-2407 (12B)` | 16.83 | 29.37 | 4.98 | 5.82 | 6.52 | 27.46 | 15.08 | | `gemma-7B` | 26.59 | 21.12 | 6.42 | 4.92 | 10.98 | 21.64 |**15.28**| Also, we evaluate our model on the benchmarks of the first leaderboard using `lighteval`. | `model name` |`ARC`|`HellaSwag` |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average` | |:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B`<sup>*</sup> | 62.03 | 80.82 | 62.11 | 73.64 | 53.42 | 52.54 | **64.09** | | `TRI-ML/mamba-7b-rw`<sup>*</sup> | 51.25 | 80.85 | 33.41 | 71.11 | 32.08 | 4.70 | 45.52 | |***Hybrid SSM-attention models***| | | | | | | | | `recurrentgemma-9b`<sup>**</sup> |52.00 | 80.40 | 60.50 | 73.60 | 38.60 | 42.60 | 57.95 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 56.14 | 82.23 | 58.11 | 79.87 | 52.88 | 30.78 | 60.00 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 59.73 | 82.91 | 58.37 | 78.30 | 52.56 | 53.83 | **64.28** | | `Meta-Llama-3-8B` | 60.24 | 82.23 | 66.70 | 78.45 | 42.93 | 45.19 | 62.62 | | `Meta-Llama-3.1-8B` | 58.53 | 82.13 | 66.43 | 74.35 | 44.29 | 47.92 | 62.28 | | `Mistral-7B-v0.1` | 59.98 | 83.31 | 64.16 | 78.37 | 42.15 | 37.83 | 60.97 | | `gemma-7B` | 61.09 | 82.20 | 64.56 | 79.01 | 44.79 | 50.87 | 63.75 | Mostly, we took evaluation results from both leaderboards. For the models marked by *star* we evaluated the tasks internally, while for the models marked by two *stars* the results were taken from paper or model card. <br> # Technical Specifications ## Model Architecture and Objective Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)). 
| **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 64 | Number of layers | | `d_model` | 4096 | Hidden dimension | | `d_state` | 16 | The SSM state dimension | | Vocabulary | 65024 | Vocabulary Size | | Sequence length | 8192 | During the last training stages | ## Compute Infrastructure ### Hardware Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. ### Software Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels. <br> # Citation You can use the following bibtex citation: ``` @misc{zuo2024falconmambacompetitiveattentionfree, title={Falcon Mamba: The First Competitive Attention-free 7B Language Model}, author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid}, year={2024}, eprint={2410.05355}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.05355}, } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/tiiuae__falcon-mamba-7b-details) | Metric |Value| |-------------------|----:| |Avg. |15.04| |IFEval (0-Shot) |33.36| |BBH (3-Shot) |19.88| |MATH Lvl 5 (4-Shot)| 3.63| |GPQA (0-shot) | 8.05| |MuSR (0-shot) |10.86| |MMLU-PRO (5-shot) |14.47|
tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF
tiiuae
2024-10-10T06:10:27Z
51
5
null
[ "gguf", "en", "dataset:tiiuae/falcon-refinedweb", "arxiv:2312.00752", "arxiv:2410.05355", "base_model:tiiuae/falcon-mamba-7b-instruct", "base_model:quantized:tiiuae/falcon-mamba-7b-instruct", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-19T16:52:58Z
--- license: other license_name: falcon-mamba-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html base_model: tiiuae/falcon-mamba-7b-instruct language: - en datasets: - tiiuae/falcon-refinedweb --- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/> **GGUF quantization of [`falcon-mamba-7b-instruct`](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) in the format `Q4_K_M`** # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) # TL;DR # Model Details ## Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Architecture:** Mamba - **Language(s) (NLP):** Mainly English - **License:** TII Falcon-Mamba License 2.0 <br> # Usage Refer to the documentation of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) to understand how to run this model locally on your machine. Download the GGUF weights with the command below: ```bash huggingface-cli download tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF --include falcon-mamba-7B-instruct-Q4_K_M.gguf --local-dir ./ ``` Then you can run it with: ```bash ./llama-cli -m falcon-mamba-7b-instruct-Q4_K_M-GGUF -p "Hello how are you?" ``` # Training Details ## Training Data Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated. Similar to the others [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192. Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long range dependency. At the last training stage, small portion of high-quality curated data was used to further enhance performance. Overall, the data sources included RefinedWeb-English, high quality technical data, code data and math data extracted from public sources. In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage. The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. ## Training Procedure Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO. ### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Max learning rate | 6.4e-4 | Following a WSD (warmup-stable-decay) learning rate schedule | | Weight decay | 1e-1 | | | Batch size | 2048 | | The model was trained AdamW optimizer, WSD (warmup-stable-decay) learning rate schedule, and a batch size rampup from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during first 50 GT of training. 
In the stable phase we used maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with exponential schedule over 500 GT. Also, we applied *BatchScaling* during the rampup — rescaling learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant. ### Speeds, Sizes, Times The model training took roughly two months. <br> # Evaluation ## Benchmarks We evaluate our model on all benchmarks of the new leaderboard's version using the `lm-evaluation-harness` package, and then normalize the evaluation results with HuggingFace score normalization. | `model name` |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| |:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B` | 33.36 | 19.88 | 3.63 |8.05 |10.86 | 14.47 |**15.04**| | `TRI-ML/mamba-7b-rw`<sup>*</sup>| 22.46 | 6.71 | 0.45 | 1.12 | 5.51 | 1.69 | 6.25 | |***Hybrid SSM-attention models*** | | | | | | | |`recurrentgemma-9b` | 30.76 | 14.80 | 4.83 | 4.70 | 6.60 | 17.88 | 13.20 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 24.06 | 21.12 | 3.32 | 3.03 | 7.74 | 16.02 | 12.55 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 32.61 | 21.94 | 2.34 | 2.80 | 7.53 | 15.44 | 13.78 | | `Meta-Llama-3-8B` | 14.55 | 24.50 | 3.25 | 7.38 | 6.24 | 24.55 | 13.41 | | `Meta-Llama-3.1-8B` | 12.70 | 25.29 | 4.61 | 6.15 | 8.98 | 24.95 | 13.78 | | `Mistral-7B-v0.1` | 23.86 | 22.02 | 2.49 | 5.59 | 10.68 | 22.36 | 14.50 | | `Mistral-Nemo-Base-2407 (12B)` | 16.83 | 29.37 | 4.98 | 5.82 | 6.52 | 27.46 | 15.08 | | `gemma-7B` | 26.59 | 21.12 | 6.42 | 4.92 | 10.98 | 21.64 |**15.28**| Also, we evaluate our model on the benchmarks of the first leaderboard using `lighteval`. | `model name` |`ARC`|`HellaSwag` |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average` | |:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B`<sup>*</sup> | 62.03 | 80.82 | 62.11 | 73.64 | 53.42 | 52.54 | **64.09** | | `TRI-ML/mamba-7b-rw`<sup>*</sup> | 51.25 | 80.85 | 33.41 | 71.11 | 32.08 | 4.70 | 45.52 | |***Hybrid SSM-attention models***| | | | | | | | | `recurrentgemma-9b`<sup>**</sup> |52.00 | 80.40 | 60.50 | 73.60 | 38.60 | 42.60 | 57.95 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 56.14 | 82.23 | 58.11 | 79.87 | 52.88 | 30.78 | 60.00 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 59.73 | 82.91 | 58.37 | 78.30 | 52.56 | 53.83 | **64.28** | | `Meta-Llama-3-8B` | 60.24 | 82.23 | 66.70 | 78.45 | 42.93 | 45.19 | 62.62 | | `Meta-Llama-3.1-8B` | 58.53 | 82.13 | 66.43 | 74.35 | 44.29 | 47.92 | 62.28 | | `Mistral-7B-v0.1` | 59.98 | 83.31 | 64.16 | 78.37 | 42.15 | 37.83 | 60.97 | | `gemma-7B` | 61.09 | 82.20 | 64.56 | 79.01 | 44.79 | 50.87 | 63.75 | Mostly, we took evaluation results from both leaderboards. For the models marked by *star* we evaluated the tasks internally, while for the models marked by two *stars* the results were taken from paper or model card. # Technical Specifications ## Model Architecture and Objective Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)). 
| **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 64 | Number of layers | | `d_model` | 4096 | Hidden dimension | | `d_state` | 16 | The SSM state dimension | | Vocabulary | 65024 | Vocabulary Size | | Sequence length | 8192 | During the last training stages | ## Compute Infrastructure ### Hardware Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. ### Software Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels. <br> # Citation You can use the following bibtex citation: ``` @misc{zuo2024falconmambacompetitiveattentionfree, title={Falcon Mamba: The First Competitive Attention-free 7B Language Model}, author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid}, year={2024}, eprint={2410.05355}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.05355}, } ```
tiiuae/falcon-mamba-7b-instruct-4bit
tiiuae
2024-10-10T06:07:15Z
170
12
null
[ "safetensors", "falcon_mamba", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:HuggingFaceFW/fineweb-edu", "arxiv:2312.00752", "arxiv:2410.05355", "base_model:tiiuae/falcon-mamba-7b-instruct", "base_model:quantized:tiiuae/falcon-mamba-7b-instruct", "license:other", "4-bit", "bitsandbytes", "region:us" ]
null
2024-08-10T10:03:20Z
--- datasets: - tiiuae/falcon-refinedweb - HuggingFaceFW/fineweb-edu language: - en license: - other license_name: falcon-mamba-7b-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html base_model: tiiuae/falcon-mamba-7b-instruct --- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/> **Make sure to install bitsandbytes and have a GPU compatible with bitsandbytes to run this model** Model card for FalconMamba Instruct model - quantized in 4bit precision # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) # TL;DR # Model Details ## Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Architecture:** Mamba - **Language(s) (NLP):** Mainly English - **License:** TII Falcon-Mamba License 2.0 <br> # Usage Find below some example scripts on how to use the model in `transformers` (Make sure to have the latest transformers, or the one built from source): ## Using the Pytorch model ### Running the model on a CPU The model is quantized in 4-bit precision with `bitsandbytes` you can only use it with a compatible GPU. ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct-4bit") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct-4bit", device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using `torch.compile` <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct-4bit") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct-4bit", torch_dtype=torch.bfloat16).to(0) model = torch.compile(model) # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids, max_new_tokens=30) print(tokenizer.decode(outputs[0])) ``` </details> # Training Details ## Training Data Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated. 
Similar to the others [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192. Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long range dependency. At the last training stage, small portion of high-quality curated data was used to further enhance performance. Overall, the data sources included RefinedWeb-English, high quality technical data, code data and math data extracted from public sources. In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage. The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. After pre-training, the model has been further fine-tuned on instruction data. ## Training Procedure Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO. ### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Max learning rate | 6.4e-4 | Following a WSD (warmup-stable-decay) learning rate schedule | | Weight decay | 1e-1 | | | Batch size | 2048 | | The model was trained AdamW optimizer, WSD (warmup-stable-decay) learning rate schedule, and a batch size rampup from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during first 50 GT of training. In the stable phase we used maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with exponential schedule over 500 GT. Also, we applied *BatchScaling* during the rampup — rescaling learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant. ### Speeds, Sizes, Times The model training took roughly two months. <br> # Evaluation ## Benchmarks We evaluate our model on all benchmarks of the new leaderboard's version using the `lm-evaluation-harness` package, and then normalize the evaluation results with HuggingFace score normalization. 
| `model name` |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| |:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B` | 33.36 | 19.88 | 3.63 |8.05 |10.86 | 14.47 |**15.04**| | `TRI-ML/mamba-7b-rw`<sup>*</sup>| 22.46 | 6.71 | 0.45 | 1.12 | 5.51 | 1.69 | 6.25 | |***Hybrid SSM-attention models*** | | | | | | | |`recurrentgemma-9b` | 30.76 | 14.80 | 4.83 | 4.70 | 6.60 | 17.88 | 13.20 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 24.06 | 21.12 | 3.32 | 3.03 | 7.74 | 16.02 | 12.55 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 32.61 | 21.94 | 2.34 | 2.80 | 7.53 | 15.44 | 13.78 | | `Meta-Llama-3-8B` | 14.55 | 24.50 | 3.25 | 7.38 | 6.24 | 24.55 | 13.41 | | `Meta-Llama-3.1-8B` | 12.70 | 25.29 | 4.61 | 6.15 | 8.98 | 24.95 | 13.78 | | `Mistral-7B-v0.1` | 23.86 | 22.02 | 2.49 | 5.59 | 10.68 | 22.36 | 14.50 | | `Mistral-Nemo-Base-2407 (12B)` | 16.83 | 29.37 | 4.98 | 5.82 | 6.52 | 27.46 | 15.08 | | `gemma-7B` | 26.59 | 21.12 | 6.42 | 4.92 | 10.98 | 21.64 |**15.28**| Also, we evaluate our model on the benchmarks of the first leaderboard using `lighteval`. | `model name` |`ARC`|`HellaSwag` |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average` | |:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:| | ***Pure SSM models*** | | | | | | | | | `FalconMamba-7B`<sup>*</sup> | 62.03 | 80.82 | 62.11 | 73.64 | 53.42 | 52.54 | **64.09** | | `TRI-ML/mamba-7b-rw`<sup>*</sup> | 51.25 | 80.85 | 33.41 | 71.11 | 32.08 | 4.70 | 45.52 | |***Hybrid SSM-attention models***| | | | | | | | | `recurrentgemma-9b`<sup>**</sup> |52.00 | 80.40 | 60.50 | 73.60 | 38.60 | 42.60 | 57.95 | | `Zyphra/Zamba-7B-v1`<sup>*</sup> | 56.14 | 82.23 | 58.11 | 79.87 | 52.88 | 30.78 | 60.00 | |***Transformer models*** | | | | | | | | | `Falcon2-11B` | 59.73 | 82.91 | 58.37 | 78.30 | 52.56 | 53.83 | **64.28** | | `Meta-Llama-3-8B` | 60.24 | 82.23 | 66.70 | 78.45 | 42.93 | 45.19 | 62.62 | | `Meta-Llama-3.1-8B` | 58.53 | 82.13 | 66.43 | 74.35 | 44.29 | 47.92 | 62.28 | | `Mistral-7B-v0.1` | 59.98 | 83.31 | 64.16 | 78.37 | 42.15 | 37.83 | 60.97 | | `gemma-7B` | 61.09 | 82.20 | 64.56 | 79.01 | 44.79 | 50.87 | 63.75 | Mostly, we took evaluation results from both leaderboards. For the models marked by *star* we evaluated the tasks internally, while for the models marked by two *stars* the results were taken from paper or model card. ## Throughput This model can achieve comparable throughput and performance compared to other transformer based models that use optimized kernels such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following commands: ```bash pip install "causal-conv1d>=1.4.0" mamba-ssm ``` Refer to our [FalconMamba blogpost](https://huggingface.co/blog/falconmamba) for more details about performance evaluation. <br> # Technical Specifications ## Model Architecture and Objective Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)). 
| **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 64 | Number of layers | | `d_model` | 4096 | Hidden dimension | | `d_state` | 16 | The SSM state dimension | | Vocabulary | 65024 | Vocabulary Size | | Sequence length | 8192 | During the last training stages | ## Compute Infrastructure ### Hardware Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. ### Software Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels. <br> # Citation You can use the following bibtex citation: ``` @misc{zuo2024falconmambacompetitiveattentionfree, title={Falcon Mamba: The First Competitive Attention-free 7B Language Model}, author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid}, year={2024}, eprint={2410.05355}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.05355}, } ```
tiiuae/falcon-mamba-7b-pre-decay
tiiuae
2024-10-10T06:05:38Z
24
3
null
[ "safetensors", "falcon_mamba", "en", "dataset:tiiuae/falcon-refinedweb", "dataset:HuggingFaceFW/fineweb-edu", "arxiv:2410.05355", "arxiv:2312.00752", "license:other", "region:us" ]
null
2024-10-07T13:45:19Z
--- language: - en datasets: - tiiuae/falcon-refinedweb - HuggingFaceFW/fineweb-edu license: other license_name: falcon-mamba-7b-license license_link: https://falconllm.tii.ae/falcon-mamba-7b-terms-and-conditions.html --- <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/> # Table of Contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) Falcon Mamba 7B - pre-decay checkpoint for continuous pretraining. Paper link: https://hf.co/papers/2410.05355 # TL;DR # Model Details ## Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Architecture:** Mamba - **Language(s) (NLP):** Mainly English - **License:** TII Falcon-Mamba License 2.0 <br> # Usage Find below some example scripts on how to use the model in `transformers` (Make sure to have the latest transformers, or the one built from source): ## Using the Pytorch model ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay") input_text = "Question: How many hours in one day? Answer: " input_ids = tokenizer(input_text, return_tensors="pt").input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay", device_map="auto") input_text = "Question: How many hours in one day? Answer: " input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using `torch.compile` <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay", torch_dtype=torch.bfloat16).to(0) model = torch.compile(model) input_text = "Question: How many hours in one day? Answer: " input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> ### Running the model on a GPU using different precisions #### FP16 <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay", device_map="auto", torch_dtype=torch.float16) input_text = "Question: How many hours in one day? 
Answer: " input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> #### 4-bit <details> <summary> Click to expand </summary> ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay") model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-pre-decay", device_map="auto", quantization_config=BitsAndBytesConfig(load_in_4bit=True)) input_text = "Question: How many hours in one day? Answer: " input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0])) ``` </details> <br> # Training Details ## Training Data Falcon-Mamba has been trained with ~ 5,500 GT mainly coming from [Refined-Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large volume web-only dataset filtered and deduplicated. Similar to the others [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite models, Falcon-Mamba has been trained leveraging a multi-stage training strategy to increase the context-length from 2,048 to 8,192. Moreover, inspired by the concept of Curriculum Learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity. Note that at inference the context-length is not relevant as the Mamba architecture has no limit on long range dependency. At the last training stage, small portion of high-quality curated data was used to further enhance performance. Overall, the data sources included RefinedWeb-English, high quality technical data, code data and math data extracted from public sources. In particular, we used samples coming from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during our last training stage. The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. ## Training Procedure Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO. ### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | |--------------------|------------|-------------------------------------------| | Precision | `bfloat16` | | | Optimizer | AdamW | | | Max learning rate | 6.4e-4 | Following a WSD (warmup-stable-decay) learning rate schedule | | Weight decay | 1e-1 | | | Batch size | 2048 | | The model was trained AdamW optimizer, WSD (warmup-stable-decay) learning rate schedule, and a batch size rampup from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during first 50 GT of training. In the stable phase we used maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\), and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with exponential schedule over 500 GT. Also, we applied *BatchScaling* during the rampup — rescaling learning rate \\(\eta\\) so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) is kept constant. ### Speeds, Sizes, Times The model training took roughly two months. <br> # Evaluation ## Throughput This model can achieve comparable throughput and performance compared to other transformer based models that use optimized kernels such as Flash Attention 2. 
Make sure to install the optimized Mamba kernels with the following commands: ```bash pip install "causal-conv1d>=1.4.0" mamba-ssm ``` Refer to our [FalconMamba blogpost](https://huggingface.co/blog/falconmamba) for more details about performance evaluation. <br> # Technical Specifications ## Model Architecture and Objective Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)). | **Hyperparameter** | **Value** | **Comment** | |--------------------|-----------|----------------------------------------| | Layers | 64 | Number of layers | | `d_model` | 4096 | Hidden dimension | | `d_state` | 16 | The SSM state dimension | | Vocabulary | 65024 | Vocabulary Size | | Sequence length | 8192 | During the last training stages | ## Compute Infrastructure ### Hardware Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. ### Software Falcon-Mamba-7B was trained on an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels. <br> # Citation ``` @misc{zuo2024falconmambacompetitiveattentionfree, title={Falcon Mamba: The First Competitive Attention-free 7B Language Model}, author={Jingwei Zuo and Maksim Velikanov and Dhia Eddine Rhaiem and Ilyas Chahed and Younes Belkada and Guillaume Kunsch and Hakim Hacid}, year={2024}, eprint={2410.05355}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2410.05355}, } ```
AlisaMaid/aidc-small-lo
AlisaMaid
2024-10-10T06:02:19Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "lo", "dataset:AlisaMaid/aidc_test", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-09T16:23:48Z
--- library_name: transformers language: - lo license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - AlisaMaid/aidc_test model-index: - name: Whisper Small Lo - Alisa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Lo - Alisa This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the AIDC Test dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
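The card above omits an inference example; here is a minimal sketch using the `automatic-speech-recognition` pipeline. The audio path is a placeholder, and forcing Lao output through `generate_kwargs` assumes the checkpoint kept Whisper's multilingual language tokens.

```python
from transformers import pipeline

# Assumption: the fine-tuned checkpoint still uses Whisper's multilingual tokenizer,
# so we can force Lao transcription via the generation kwargs.
asr = pipeline(
    "automatic-speech-recognition",
    model="AlisaMaid/aidc-small-lo",
    generate_kwargs={"language": "lao", "task": "transcribe"},
)

# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav")["text"])
```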
Vs2882/liar_binaryclassifier_distilbert_cased
Vs2882
2024-10-10T05:57:07Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "dataset:liar", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-22T14:38:03Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer datasets: - liar metrics: - accuracy model-index: - name: liar_binaryclassifier_distilbert_cased results: - task: name: Text Classification type: text-classification dataset: name: liar type: liar config: default split: test args: default metrics: - name: Accuracy type: accuracy value: 0.6464208242950108 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # liar_binaryclassifier_distilbert_cased This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the liar dataset. It achieves the following results on the evaluation set: - Loss: 0.6488 - Model Preparation Time: 0.0034 - Accuracy: 0.6464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:--------:| | 0.6836 | 1.0 | 461 | 0.6520 | 0.0034 | 0.6226 | | 0.6423 | 2.0 | 922 | 0.6326 | 0.0034 | 0.6399 | | 0.6091 | 3.0 | 1383 | 0.6362 | 0.0034 | 0.6443 | | 0.5843 | 4.0 | 1844 | 0.6422 | 0.0034 | 0.6551 | | 0.5624 | 5.0 | 2305 | 0.6488 | 0.0034 | 0.6464 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
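The card above gives no inference example, so here is a minimal sketch using the `text-classification` pipeline. The checkpoint's label names are not documented, so whether `LABEL_0`/`LABEL_1` corresponds to the "false" or "true" side of the binarized LIAR labels is something you should verify on held-out examples.

```python
from transformers import pipeline

# Binary truthfulness classifier fine-tuned on the LIAR dataset.
classifier = pipeline(
    "text-classification",
    model="Vs2882/liar_binaryclassifier_distilbert_cased",
)

statement = "Building a wall on the U.S.-Mexico border will take literally years."
print(classifier(statement))
# Returns a list like [{"label": "LABEL_0", "score": ...}];
# check which label corresponds to which class before relying on it.
```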
dmitrisaberi/fine-tuned-donut-v3
dmitrisaberi
2024-10-10T05:42:41Z
47
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-09T23:21:32Z
--- library_name: transformers license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: fine-tuned-donut-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-donut-v3 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
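A minimal, hedged inference sketch for this Donut checkpoint is shown below. It assumes the processor was pushed together with the model, and the task prompt token is a placeholder, since the card does not state which start token was used during fine-tuning.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("dmitrisaberi/fine-tuned-donut-v3")
model = VisionEncoderDecoderModel.from_pretrained("dmitrisaberi/fine-tuned-donut-v3")

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumption: fine-tuned Donut models expect the same start token used in training
# (e.g. "<s_cord-v2>" for CORD); "<s>" is only a placeholder here.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```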
VishalD1234/Florence-metere1
VishalD1234
2024-10-10T05:33:39Z
103
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-10T05:02:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf
RichardErkhov
2024-10-10T04:42:02Z
85
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-10T01:37:48Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mistral-sk-7b - GGUF - Model creator: https://huggingface.co/slovak-nlp/ - Original model: https://huggingface.co/slovak-nlp/mistral-sk-7b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mistral-sk-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q2_K.gguf) | Q2_K | 2.53GB | | [mistral-sk-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [mistral-sk-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.IQ3_S.gguf) | IQ3_S | 2.96GB | | [mistral-sk-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [mistral-sk-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.IQ3_M.gguf) | IQ3_M | 3.06GB | | [mistral-sk-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q3_K.gguf) | Q3_K | 3.28GB | | [mistral-sk-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [mistral-sk-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [mistral-sk-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [mistral-sk-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q4_0.gguf) | Q4_0 | 3.83GB | | [mistral-sk-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [mistral-sk-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [mistral-sk-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q4_K.gguf) | Q4_K | 4.07GB | | [mistral-sk-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [mistral-sk-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q4_1.gguf) | Q4_1 | 4.24GB | | [mistral-sk-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q5_0.gguf) | Q5_0 | 4.65GB | | [mistral-sk-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [mistral-sk-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q5_K.gguf) | Q5_K | 4.78GB | | [mistral-sk-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [mistral-sk-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q5_1.gguf) | Q5_1 | 5.07GB | | 
[mistral-sk-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q6_K.gguf) | Q6_K | 5.53GB | | [mistral-sk-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/slovak-nlp_-_mistral-sk-7b-gguf/blob/main/mistral-sk-7b.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers license: apache-2.0 language: - sk base_model: - mistralai/Mistral-7B-v0.1 --- # Model Card for mistral-sk-7b **mistral-sk-7b** is a Slovak language version of the Mistral-7B-v0.1 large language model with 7 billion parameters. ## Model Details **mistral-sk-7b** is a Slovak language model obtained by full-parameter finetuning of the Mistral-7B-v0.1 large language model on data from the Araneum Slovacum VII Maximum web corpus. The model was developed in a collaboration between the Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice; the Centre of Social and Psychological Sciences of the Slovak Academy of Sciences; and the Ľ. Štúr Institute of Linguistics, Slovak Academy of Sciences. This is a base pre-trained model that can be used for further finetuning on downstream tasks in the Slovak language. Note that this model does not have any moderation mechanisms. - **Language:** Slovak - **License:** Apache license 2.0 - **Finetuned from model:** Mistral-7B-v0.1 - **Authors:** - Peter Bednár, Department of Cybernetics and Artificial Intelligence, Faculty of Electrical Engineering and Informatics, Technical University of Košice - Marek Dobeš, Centre of Social and Psychological Sciences of the Slovak Academy of Sciences and ČZ o.z. - Radovan Garabík, Ľ. Štúr Institute of Linguistics, Slovak Academy of Sciences, supported by DiusAI a. s. ## Supported by - Part of the research results was obtained using the high-performance computing resources operated by CINECA and awarded within the National Leonardo access call 2023 by the Centre of Operations, Slovak Academy of Sciences and the Slovak National Supercomputing Centre.
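Since this card only lists the quantized files, a minimal sketch of loading one of them with the `llama-cpp-python` bindings is shown below; the file name, prompt, and sampling settings are placeholders.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder file name: download one of the GGUF files listed above first,
# e.g. with huggingface_hub.hf_hub_download.
llm = Llama(model_path="mistral-sk-7b.Q4_K_M.gguf", n_ctx=4096)

# Base model, so plain text completion rather than chat formatting.
out = llm("Slovensko je krajina, ktorá", max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```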
QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF
QuantFactory
2024-10-10T03:50:59Z
101
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:ifable/gemma-2-Ifable-9B", "base_model:merge:ifable/gemma-2-Ifable-9B", "base_model:nbeerbower/Gemma2-Gutenberg-Doppel-9B", "base_model:merge:nbeerbower/Gemma2-Gutenberg-Doppel-9B", "base_model:unsloth/gemma-2-9b-it", "base_model:merge:unsloth/gemma-2-9b-it", "base_model:wzhouad/gemma-2-9b-it-WPO-HB", "base_model:merge:wzhouad/gemma-2-9b-it-WPO-HB", "model-index", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T02:44:45Z
--- library_name: transformers tags: - mergekit - merge base_model: - nbeerbower/Gemma2-Gutenberg-Doppel-9B - ifable/gemma-2-Ifable-9B - unsloth/gemma-2-9b-it - wzhouad/gemma-2-9b-it-WPO-HB model-index: - name: Gemma-2-Ataraxy-v3i-9B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 42.03 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 38.24 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 0.15 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 10.4 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 1.76 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 35.18 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B name: Open LLM Leaderboard --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF This is quantized version of [lemon07r/Gemma-2-Ataraxy-v3i-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3i-9B) created using llama.cpp # Original Model Card # Gemma-2-Ataraxy-v3i-9B Another experimental model. This one is in the vein of advanced 2.1, but we replace the simpo model used in the original recipe, with a different simpo model, that was more finetuned with writing in mind, ifable. We also use another writing model, which was trained on gutenberg. We use this one at a higher density because SPPO, on paper is the superior training method, to simpo, and quite frankly, ifable is finicky to work with, and can end up being a little too strong.. or heavy in merges. It's a very strong writer but it introduced quite a bit slop in v2. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). 
## GGUF https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3i-9B-Q8_0-GGUF ## Merge Details ### Merge Method This model was merged using the della merge method using [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) as a base. ### Models Merged The following models were included in the merge: * [nbeerbower/Gemma2-Gutenberg-Doppel-9B](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B) * [ifable/gemma-2-Ifable-9B](https://huggingface.co/ifable/gemma-2-Ifable-9B) * [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: unsloth/gemma-2-9b-it dtype: bfloat16 merge_method: della parameters: epsilon: 0.1 int8_mask: 1.0 lambda: 1.0 normalize: 1.0 slices: - sources: - layer_range: [0, 42] model: unsloth/gemma-2-9b-it - layer_range: [0, 42] model: wzhouad/gemma-2-9b-it-WPO-HB parameters: density: 0.55 weight: 0.6 - layer_range: [0, 42] model: nbeerbower/Gemma2-Gutenberg-Doppel-9B parameters: density: 0.35 weight: 0.6 - layer_range: [0, 42] model: ifable/gemma-2-Ifable-9B parameters: density: 0.25 weight: 0.4 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v3i-9B) | Metric |Value| |-------------------|----:| |Avg. |21.29| |IFEval (0-Shot) |42.03| |BBH (3-Shot) |38.24| |MATH Lvl 5 (4-Shot)| 0.15| |GPQA (0-shot) |10.40| |MuSR (0-shot) | 1.76| |MMLU-PRO (5-shot) |35.18|
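To reproduce a merge like this from the YAML above, mergekit can be driven from Python roughly as sketched below. The entry points follow mergekit's README at the time of writing and may differ between versions; the config file name and output path are placeholders, and the `mergekit-yaml config.yaml ./out` CLI is the equivalent one-liner.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "ataraxy-v3i.yaml" is a placeholder file containing the YAML configuration shown above.
with open("ataraxy-v3i.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Gemma-2-Ataraxy-v3i-9B",  # placeholder output directory
    options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
)
```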
RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf
RichardErkhov
2024-10-10T03:31:40Z
48
1
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T00:42:16Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) saul-7B-merged-3000 - GGUF - Model creator: https://huggingface.co/ruchi012/ - Original model: https://huggingface.co/ruchi012/saul-7B-merged-3000/ | Name | Quant method | Size | | ---- | ---- | ---- | | [saul-7B-merged-3000.Q2_K.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q2_K.gguf) | Q2_K | 2.53GB | | [saul-7B-merged-3000.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [saul-7B-merged-3000.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.IQ3_S.gguf) | IQ3_S | 2.96GB | | [saul-7B-merged-3000.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [saul-7B-merged-3000.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.IQ3_M.gguf) | IQ3_M | 3.06GB | | [saul-7B-merged-3000.Q3_K.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q3_K.gguf) | Q3_K | 3.28GB | | [saul-7B-merged-3000.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [saul-7B-merged-3000.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [saul-7B-merged-3000.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [saul-7B-merged-3000.Q4_0.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q4_0.gguf) | Q4_0 | 3.83GB | | [saul-7B-merged-3000.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [saul-7B-merged-3000.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [saul-7B-merged-3000.Q4_K.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q4_K.gguf) | Q4_K | 4.07GB | | [saul-7B-merged-3000.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [saul-7B-merged-3000.Q4_1.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q4_1.gguf) | Q4_1 | 4.24GB | | [saul-7B-merged-3000.Q5_0.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q5_0.gguf) | Q5_0 | 4.65GB | | [saul-7B-merged-3000.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [saul-7B-merged-3000.Q5_K.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q5_K.gguf) | Q5_K | 4.78GB | | 
[saul-7B-merged-3000.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [saul-7B-merged-3000.Q5_1.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q5_1.gguf) | Q5_1 | 5.07GB | | [saul-7B-merged-3000.Q6_K.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q6_K.gguf) | Q6_K | 5.53GB | | [saul-7B-merged-3000.Q8_0.gguf](https://huggingface.co/RichardErkhov/ruchi012_-_saul-7B-merged-3000-gguf/blob/main/saul-7B-merged-3000.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
liuganghuggingface/Llamole-Pretrained-GraphEncoder
liuganghuggingface
2024-10-10T03:26:16Z
7
0
null
[ "graph-ml", "arxiv:2410.04223", "license:mit", "region:us" ]
graph-ml
2024-10-07T19:33:23Z
--- license: mit pipeline_tag: graph-ml --- # Pretrained Graph Encoder for Enhanced Molecular Understanding in LLMs This pretrained graph encoder improves molecular understanding in large language models (LLMs), enabling enhanced performance in molecular design tasks. 📄 **Paper**: [Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning](https://arxiv.org/abs/2410.04223) 📁 **Repository:** https://github.com/liugangcode/Llamole
liuganghuggingface/Llamole-Qwen2-7B-Instruct-Adapter
liuganghuggingface
2024-10-10T03:25:47Z
8
0
peft
[ "peft", "safetensors", "Text-Graph-to-Text", "chemistry", "material science", "molecular design", "graph-ml", "en", "dataset:liuganghuggingface/Llamole-MolQA", "arxiv:2410.04223", "base_model:Qwen/Qwen2-7B-Instruct", "base_model:adapter:Qwen/Qwen2-7B-Instruct", "region:us" ]
graph-ml
2024-10-07T20:51:05Z
--- base_model: Qwen/Qwen2-7B-Instruct tags: - Text-Graph-to-Text - chemistry - material science - molecular design language: - en pipeline_tag: graph-ml library_name: peft datasets: - liuganghuggingface/Llamole-MolQA --- # Model Card for Model ID This is the PEFT adapter fine-tuned for Llamole (Multimodal Large Language Model for Molecular Discovery). ## Model Sources [optional] - **Repository:** https://github.com/liugangcode/Llamole - **Paper:** [Multimodal Large Language Models for Inverse Molecular Design with Retrosynthetic Planning](https://arxiv.org/abs/2410.04223) - **Demo:** Coming soon ## Training Details Coming soon <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
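Until the training details are published, a minimal sketch of attaching this adapter to its base model with `peft` looks like the following; the prompt and generation settings are placeholders, and the graph-conditioned workflow from the paper requires the full Llamole codebase rather than this text-only snippet.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "liuganghuggingface/Llamole-Qwen2-7B-Instruct-Adapter")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")

# Text-only generation; the graph encoder and retrosynthesis components from the
# Llamole repository are needed for the full text-graph pipeline described in the paper.
prompt = "Propose a drug-like molecule with high solubility."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```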
zyusc/meta-llama-Meta-Llama-3.1-8B-Instruct-alpaca-english-similarity-structure-top337-humaneval
zyusc
2024-10-10T03:22:46Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-10T03:19:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ngwgsang/bartpho-syllable-large-visp-s3
ngwgsang
2024-10-10T03:22:45Z
105
0
transformers
[ "transformers", "safetensors", "mbart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-10T03:21:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jfanx86/git-base-pokemon
jfanx86
2024-10-10T03:08:39Z
62
0
transformers
[ "transformers", "tensorboard", "safetensors", "git", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-10T02:47:05Z
--- library_name: transformers license: mit base_model: microsoft/git-base tags: - generated_from_trainer model-index: - name: git-base-pokemon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0391 - Wer Score: 2.3433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:------:|:----:|:---------------:|:---------:| | No log | 2.0833 | 25 | 0.0365 | 3.3472 | | 0.0022 | 4.1667 | 50 | 0.0383 | 2.3199 | | 0.0022 | 6.25 | 75 | 0.0384 | 2.8114 | | 0.0005 | 8.3333 | 100 | 0.0391 | 2.3433 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
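A minimal captioning sketch for this checkpoint is given below; the image path is a placeholder, and it assumes the processor was pushed alongside the model (otherwise it can be loaded from microsoft/git-base).

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("jfanx86/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("jfanx86/git-base-pokemon")

image = Image.open("pokemon.png").convert("RGB")  # placeholder image path
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```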
Wonder-Griffin/judge-xl-model
Wonder-Griffin
2024-10-10T02:40:43Z
54
0
transformers
[ "transformers", "safetensors", "judge-xl", "text-generation-inference", "text-generation", "conversational", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:BAAI/Infinity-Instruct", "dataset:THUDM/LongWriter-6k", "dataset:SkunkworksAI/reasoning-0.01", "dataset:wikimedia/wikipedia", "dataset:Salesforce/wikitext", "arxiv:1910.09700", "base_model:Wonder-Griffin/Judge-GPT2", "base_model:finetune:Wonder-Griffin/Judge-GPT2", "license:wtfpl", "endpoints_compatible", "region:us" ]
text-generation
2024-09-20T05:01:52Z
--- base_model: - Wonder-Griffin/XL-Judge-LLM - Wonder-Griffin/Judge-GPT2 datasets: - fka/awesome-chatgpt-prompts - BAAI/Infinity-Instruct - THUDM/LongWriter-6k - SkunkworksAI/reasoning-0.01 - wikimedia/wikipedia - Salesforce/wikitext language: - en library_name: transformers license: wtfpl metrics: - f1 - accuracy - perplexity - precision tags: - text-generation-inference inference: true pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
crystantine/STRWRZ
crystantine
2024-10-10T02:39:19Z
7
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-10T02:29:59Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora widget: - text: >- In a futuristic urban landscape, a stunning building emerges. This striking edifice features sleek, flowing lines that mimic the curves of nature, with its exterior composed of shimmering materials that catch the sunlight, reflecting a spectrum of colors that shift throughout the day. The windows are oversized and seamlessly integrated, offering panoramic views of the bustling city below and inviting natural light to bathe the interior in a warm glow. Surrounding the structure, a meticulously landscaped park showcases vibrant greenery, with meticulously arranged plant beds boasting colorful flowers and native shrubs that add a splash of life to the setting. The sound of water gently bubbling from modern fountains creates a serene atmosphere, inviting passersby to linger. As pedestrians wander along the adjacent pathways made of smooth, polished stones, they admire the elegant design that blends art and functionality. Inside, the open-plan layout amplifies the feeling of space, with high ceilings adorned with artistic light fixtures that resemble organic forms. The air is faintly scented with fresh flora from the indoor gardens, and the soft hum of conversation mingles with the gentle whir of technology seamlessly integrated into the environment. It's a harmonious blend of nature and innovation, embodying a visionary future where architecture coexists beautifully with its surroundings. In the architecture style of STRWRZ. output: url: samples/1728511374526__000008000_0.jpg - text: >- In front of a huge mansion, a striking young woman stands with poise, captivating in her beauty. Her long, flowing hair cascades in gentle waves, shimmering with hues of deep chestnut and sun-kissed highlights. She wears a delicate, fitted blouse adorned with intricate lace details that hug her graceful figure, while a pastel skirt flares slightly at the waist, exuding a sense of femininity and charm. Her expressive hazel eyes are framed by long, dark lashes and sparkle with a hint of mischief, drawing the viewer into her enchanting gaze. A subtle smile dances on her lips, hinting at warmth and charisma, as she reaches out slightly, her fingers gently brushing against her cheek, accentuating her flawless complexion. In the architecture style of STRWRZ. output: url: samples/1728511404319__000008000_1.jpg - text: >- In a luxurious, elegantly designed cat room adorned with plush, oversized furniture, two playful Maine Coon cats intertwine joyfully. The room is flooded with soft, natural light streaming through expansive windows draped with delicate sheer curtains, casting gentle patterns on the polished hardwood floor. One cat, with a stunning tabby coat and striking green eyes, pounces energetically on a whimsical feather toy, its long, tufted ears perked up in keen anticipation. The other, a majestic cream-colored Maine Coon, lounges on a plush velvet cushion, its bushy tail swishing curiously as it watches its companion's spirited antics. Surrounding them, the room is tastefully decorated with colorful cat trees that reach toward the ceiling, accented with dangling strings and sisal-covered scratching posts. The air feels warm and inviting, with a faint scent of catnip lingering playfully. As they frolic, their soft, melodious purring fills the space, mingling with the subtle rustle of a nearby tuneful wind chime that clinks gently in the breeze. 
In one corner, a cozy nook with a sunlit window invites quiet moments, yet today, the joy of their playful chase transforms the sophisticated space into a lively playground filled with love and laughter. In the architecture style of STRWRZ. output: url: samples/1728511434321__000008000_2.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: In the architecture style of STRWRZ license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md pipeline_tag: text-to-image --- # strwrz Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `In the architecture style of STRWRZ` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/None/tree/main) them in the Files & versions tab.
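A minimal sketch of applying this LoRA with diffusers follows; the prompt reuses the trigger phrase from the card, the sampler settings are placeholders, and `weight_name=` may need to be passed to `load_lora_weights` if the safetensors file is not auto-detected.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("crystantine/STRWRZ")  # add weight_name="..." if auto-detection fails
pipe.to("cuda")  # or pipe.enable_model_cpu_offload() on smaller GPUs

prompt = "A glass tower wrapped in hanging gardens, In the architecture style of STRWRZ"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("strwrz_building.png")
```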
NikolayKozloff/qwen2.5-7b-ins-v3-Q8_0-GGUF
NikolayKozloff
2024-10-10T02:38:34Z
8
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:happzy2633/qwen2.5-7b-ins-v3", "base_model:quantized:happzy2633/qwen2.5-7b-ins-v3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T02:38:00Z
--- base_model: happzy2633/qwen2.5-7b-ins-v3 license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/qwen2.5-7b-ins-v3-Q8_0-GGUF This model was converted to GGUF format from [`happzy2633/qwen2.5-7b-ins-v3`](https://huggingface.co/happzy2633/qwen2.5-7b-ins-v3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/happzy2633/qwen2.5-7b-ins-v3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/qwen2.5-7b-ins-v3-Q8_0-GGUF --hf-file qwen2.5-7b-ins-v3-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/qwen2.5-7b-ins-v3-Q8_0-GGUF --hf-file qwen2.5-7b-ins-v3-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/qwen2.5-7b-ins-v3-Q8_0-GGUF --hf-file qwen2.5-7b-ins-v3-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/qwen2.5-7b-ins-v3-Q8_0-GGUF --hf-file qwen2.5-7b-ins-v3-q8_0.gguf -c 2048 ```
Sovego/clip_vit_base_32_make_model
Sovego
2024-10-10T02:35:48Z
103
0
transformers
[ "transformers", "pytorch", "safetensors", "clip", "zero-shot-image-classification", "en", "base_model:openai/clip-vit-base-patch32", "base_model:finetune:openai/clip-vit-base-patch32", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
2024-10-03T04:44:59Z
--- base_model: - openai/clip-vit-base-patch32 language: - en library_name: transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** IIR-NSU - **Model type:** CLIP - **Language(s) (NLP):** English - **License:** MIT
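The model name suggests the checkpoint targets car make/model recognition, so the label set below is only an assumption; the sketch shows standard zero-shot classification with `CLIPModel` and `CLIPProcessor`.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("Sovego/clip_vit_base_32_make_model")
processor = CLIPProcessor.from_pretrained("Sovego/clip_vit_base_32_make_model")

image = Image.open("car.jpg")  # placeholder image
labels = ["a photo of a Toyota", "a photo of a BMW", "a photo of a Ford"]  # assumed label set

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```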
RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf
RichardErkhov
2024-10-10T02:21:56Z
8
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-09T23:20:13Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) LLama-3-8b-Python - GGUF - Model creator: https://huggingface.co/Kukedlc/ - Original model: https://huggingface.co/Kukedlc/LLama-3-8b-Python/ | Name | Quant method | Size | | ---- | ---- | ---- | | [LLama-3-8b-Python.Q2_K.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q2_K.gguf) | Q2_K | 2.96GB | | [LLama-3-8b-Python.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [LLama-3-8b-Python.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.IQ3_S.gguf) | IQ3_S | 3.43GB | | [LLama-3-8b-Python.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [LLama-3-8b-Python.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.IQ3_M.gguf) | IQ3_M | 3.52GB | | [LLama-3-8b-Python.Q3_K.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q3_K.gguf) | Q3_K | 3.74GB | | [LLama-3-8b-Python.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [LLama-3-8b-Python.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [LLama-3-8b-Python.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [LLama-3-8b-Python.Q4_0.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q4_0.gguf) | Q4_0 | 4.34GB | | [LLama-3-8b-Python.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [LLama-3-8b-Python.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [LLama-3-8b-Python.Q4_K.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q4_K.gguf) | Q4_K | 4.58GB | | [LLama-3-8b-Python.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [LLama-3-8b-Python.Q4_1.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q4_1.gguf) | Q4_1 | 4.78GB | | [LLama-3-8b-Python.Q5_0.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q5_0.gguf) | Q5_0 | 5.21GB | | [LLama-3-8b-Python.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [LLama-3-8b-Python.Q5_K.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q5_K.gguf) | Q5_K | 5.34GB | | [LLama-3-8b-Python.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | 
[LLama-3-8b-Python.Q5_1.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q5_1.gguf) | Q5_1 | 5.65GB | | [LLama-3-8b-Python.Q6_K.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q6_K.gguf) | Q6_K | 6.14GB | | [LLama-3-8b-Python.Q8_0.gguf](https://huggingface.co/RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf/blob/main/LLama-3-8b-Python.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: other --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/KNU2JjsNRXyprTdtU4kWx.png)
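As a usage sketch (not part of the original card): any of the files listed above can be fetched with `huggingface_hub` and run with a GGUF runtime. The choice of llama-cpp-python, the picked filename, and the generation settings below are illustrative assumptions.

```python
# Minimal sketch: download one quant from this repo and run it locally.
# llama-cpp-python is one possible GGUF runtime, not something the card prescribes.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="RichardErkhov/Kukedlc_-_LLama-3-8b-Python-gguf",
    filename="LLama-3-8b-Python.Q4_K_M.gguf",  # ~4.58GB entry from the table above
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```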
semo720/lora-sdxl-painting
semo720
2024-10-10T02:19:20Z
6
2
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-10-08T04:50:07Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a doll in szn style' output: url: "image_0.png" - text: 'a doll in szn style' output: url: "image_1.png" - text: 'a doll in szn style' output: url: "image_2.png" - text: 'a doll in szn style' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a shirt of in szn style license: openrail++ --- # SDXL LoRA DreamBooth - semo720/lora-sdxl-painting <Gallery /> ## Model description These are semo720/lora-sdxl-painting LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use `a shirt of in szn style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/semo720/lora-sdxl-painting/tree/main) them in the Files & versions tab.
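For quick reference, a minimal diffusers loading sketch (this usage example is not part of the original card; GPU use, fp16, and the step count are assumptions):

```python
# Minimal sketch: attach these LoRA weights to the SDXL base model and generate
# with the trigger phrase from the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("semo720/lora-sdxl-painting")

image = pipe("a shirt of in szn style", num_inference_steps=30).images[0]
image.save("szn_style.png")
```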
ymcki/gemma-2-2b-jpn-it-GGUF
ymcki
2024-10-10T02:00:15Z
78
1
transformers
[ "transformers", "gguf", "nlp", "code", "text-generation", "multilingual", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "base_model:google/gemma-2-2b-jpn-it", "base_model:quantized:google/gemma-2-2b-jpn-it", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-10-03T11:20:04Z
--- base_model: google/gemma-2-2b-jpn-it language: - multilingual datasets: - TFMC/imatrix-dataset-for-japanese-llm library_name: transformers license: gemma license_link: https://ai.google.dev/gemma/terms pipeline_tag: text-generation tags: - nlp - code quantized_by: ymcki widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- Original model: https://huggingface.co/google/gemma-2-2b-jpn-it ## Prompt format ``` <start_of_turn>user {prompt}<end_of_turn> <start_of_turn>model <end_of_turn> <start_of_turn>model ``` Note that this model does not support a System prompt. ## Download a file (not the whole branch) from below: ELIZA-Tasks-100 is pretty standard benchmark for Japanese LLMs. The perfect score is 5.00. As a reference, bartowski's gemma-2-27b-it.Q6_K.gguf scores 4.04. | Filename | Quant type | File Size | ELIZA-Tasks-100 | Nvidia 3090 | Description | | -------- | ---------- | --------- | --------------- | ----------- | ----------- | | [gemma-2-2b-jpn-it.f16.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.f16.gguf) | f16 | 5.24GB | 2.90 | 98t/s | Full F16 weights. | | [gemma-2-2b-jpn-it.Q8_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q8_0.gguf) | Q8_0 | 2.78GB | 3.06 | 140t/s | Extremely high quality, *recommended*. | | [gemma-2-2b-jpn-it-imatrix.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0.gguf) | Q4_0 | 1.63GB | 2.89 | 137t/s | Good quality, *recommended for edge devices <8GB RAM*. | | [gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | 2.78 | 2.79t/s | Good quality, *recommended for edge devices <8GB RAM*. | | [gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | 2.77 | 2.61t/s | Good quality, *recommended for edge devices <8GB RAM*. | | [gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | 2.65 | 3.09t/s | Good quality, *recommended for edge devices <8GB RAM*. | | [gemma-2-2b-jpn-it.Q4_0.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0.gguf) | Q4_0 | 1.63GB | 2.77 | 159t/s | Good quality, *recommended for edge devices <8GB RAM* | | [gemma-2-2b-jpn-it.Q4_0_8_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_8_8.gguf) | Q4_0_8_8 | 1.63GB | 2.92 | 2.85t/s | Good quality, *recommended for edge devices <8GB RAM* | | [gemma-2-2b-jpn-it.Q4_0_4_8.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_8.gguf) | Q4_0_4_8 | 1.63GB | 2.74 | 2.56t/s | Good quality, *recommended for edge devices <8GB RAM* | | [gemma-2-2b-jpn-it.Q4_0_4_4.gguf](https://huggingface.co/ymcki/gemma-2-2b-jpn-it-GGUF/blob/main/gemma-2-2b-jpn-it.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.63GB | 2.70 | 3.10t/s | Good quality, *recommended for edge devices <8GB RAM*. | ## How to check i8mm and sve support for ARM devices ARM i8mm support is necessary to take advantage of Q4_0_4_8 gguf. All ARM architecture >= ARMv8.6-A supports i8mm. ARM sve support is necessary to take advantage of Q4_0_8_8 gguf. 
sve is an optional feature that starts from ARMv8.2-A, but the majority of ARM chips don't implement it. For ARM devices without both, it is recommended to use Q4_0_4_4. With this support, the inference speed should be faster in the order of Q4_0_8_8 > Q4_0_4_8 > Q4_0_4_4 > Q4_0, without much effect on the quality of the response. This is a [list](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) of ARM CPUs that support different ARM instructions. Another [list](https://raw.githubusercontent.com/ThomasKaiser/sbc-bench/refs/heads/master/sbc-bench.sh). Apparently, they only cover a limited number of ARM CPUs. It is better to check for i8mm and sve support yourself. For Apple devices, ``` sysctl hw ``` For other ARM devices (i.e. most Android devices), ``` cat /proc/cpuinfo ``` There are also Android apps that can display /proc/cpuinfo. I was told that for Intel/AMD CPU inference, support for AVX2/AVX512 can also improve the performance of Q4_0_8_8. On the other hand, Nvidia 3090 inference speed is significantly faster for Q4_0 than the other ggufs. That means for GPU inference, you are better off using Q4_0. ## Which Q4_0 model to use for ARM devices | Brand | Series | Model | i8mm | sve | Quant Type | | ----- | ------ | ----- | ---- | --- | -----------| | Apple | A | A4 to A14 | No | No | Q4_0_4_4 | | Apple | A | A15 to A18 | Yes | No | Q4_0_4_8 | | Apple | M | M1 | No | No | Q4_0_4_4 | | Apple | M | M2/M3/M4 | Yes | No | Q4_0_4_8 | | Google | Tensor | G1,G2 | No | No | Q4_0_4_4 | | Google | Tensor | G3,G4 | Yes | Yes | Q4_0_8_8 | | Samsung | Exynos | 2200,2400 | Yes | Yes | Q4_0_8_8 | | Mediatek | Dimensity | 9000,9000+ | Yes | Yes | Q4_0_8_8 | | Mediatek | Dimensity | 9300 | Yes | No | Q4_0_4_8 | | Qualcomm | Snapdragon | 7+ Gen 2,8/8+ Gen 1 | Yes | Yes | Q4_0_8_8 | | Qualcomm | Snapdragon | 8 Gen 2,8 Gen 3,X Elite | Yes | No | Q4_0_4_8 | ## imatrix quantization According to this [blog](https://sc-bakushu.hatenablog.com/entry/2024/04/20/050213), adding an imatrix to low-bit quants can significantly improve performance. The best dataset for Japanese is [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). Therefore, I also created the imatrix versions of the different Q4_0 quants. However, based on my benchmarking results, the difference is not significant. ## Convert safetensors to f16 gguf Make sure you have llama.cpp git cloned: ``` python3 convert_hf_to_gguf.py gemma-2-2b-jpn-it/ --outfile gemma-2-2b-jpn-it.f16.gguf --outtype f16 ``` ## Convert f16 gguf to Q8_0 gguf without imatrix Make sure you have llama.cpp compiled: ``` ./llama-quantize gemma-2-2b-jpn-it.f16.gguf gemma-2-2b-jpn-it.Q8_0.gguf q8_0 ``` ## Convert f16 gguf to other ggufs with imatrix First, prepare the imatrix from the f16 gguf and c4_en_ja_imatrix.txt: ``` ./llama-imatrix -m gemma-2-2b-jpn-it.f16.gguf -f c4_en_ja_imatrix.txt -o gemma-2-2b-jpn-it.imatrix --chunks 32 ``` Then, convert the f16 gguf with the imatrix to create the imatrix gguf: ``` ./llama-quantize --imatrix gemma-2-2b-jpn-it.imatrix gemma-2-2b-jpn-it.f16.gguf gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf q4_0_8_8 ``` ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download ymcki/gemma-2-2b-jpn-it-GGUF --include "gemma-2-2b-jpn-it-Q8_0.gguf" --local-dir ./ ``` ## Credits Thank you bartowski for providing a README.md to get me started.
Thank you YoutechA320U for the ELYZA-tasks-100 auto evaluation tool.
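A small helper (my own sketch, Linux-only, not part of the original card) that applies the card's rule of thumb from the "How to check i8mm and sve support" and "Which Q4_0 model to use" sections above: sve + i8mm picks Q4_0_8_8, i8mm only picks Q4_0_4_8, otherwise Q4_0_4_4.

```python
# Rough helper: read /proc/cpuinfo and map the CPU flags to one of the card's
# Q4_0 imatrix files. The simple substring check is deliberately coarse.
def pick_q4_variant(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        flags = f.read().lower()
    has_i8mm = "i8mm" in flags
    has_sve = "sve" in flags
    if has_i8mm and has_sve:
        return "gemma-2-2b-jpn-it-imatrix.Q4_0_8_8.gguf"
    if has_i8mm:
        return "gemma-2-2b-jpn-it-imatrix.Q4_0_4_8.gguf"
    return "gemma-2-2b-jpn-it-imatrix.Q4_0_4_4.gguf"

print(pick_q4_variant())
```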
mav23/gemma-2-Ifable-9B-GGUF
mav23
2024-10-10T01:58:49Z
96
0
transformers
[ "transformers", "gguf", "dataset:jondurbin/gutenberg-dpo-v0.1", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-10T00:51:13Z
--- license: gemma library_name: transformers datasets: - jondurbin/gutenberg-dpo-v0.1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ifable/gemma-2-Ifable-9B This model ranked first on the Creative Writing Benchmark (https://eqbench.com/creative_writing.html) on September 10, 2024 ## Training and evaluation data - Gutenberg: https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1 - Carefully curated proprietary creative writing dataset ## Training procedure Training method: SimPO (GitHub - princeton-nlp/SimPO: SimPO: Simple Preference Optimization with a Reference-Free Reward) It achieves the following results on the evaluation set: - Loss: 1.0163 - Rewards/chosen: -21.6822 - Rewards/rejected: -47.8754 - Rewards/accuracies: 0.9167 - Rewards/margins: 26.1931 - Logps/rejected: -4.7875 - Logps/chosen: -2.1682 - Logits/rejected: -17.0475 - Logits/chosen: -12.0041 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:| | 1.4444 | 0.9807 | 35 | 1.0163 | -21.6822 | -47.8754 | 0.9167 | 26.1931 | -4.7875 | -2.1682 | -17.0475 | -12.0041 | 0.0184 | ### Framework versions - Transformers 4.43.4 - Pytorch 2.3.0a0+ebedce2 - Datasets 2.20.0 - Tokenizers 0.19.1 We are looking for product manager and operations managers to build applications through our model, and also open for business cooperation, and also AI engineer to join us, contact with : [email protected]
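For context on the training method named above, the SimPO objective (as defined in the linked princeton-nlp/SimPO work; the notation below is mine) optimizes a length-normalized implicit reward against a target margin:

$$r_\theta(x, y) = \frac{\beta}{|y|} \log \pi_\theta(y \mid x)$$

$$\mathcal{L}_{\mathrm{SimPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(r_\theta(x, y_w) - r_\theta(x, y_l) - \gamma\right)\right]$$

This is consistent with the metrics reported above, where the Rewards/chosen and Rewards/rejected values appear to be the corresponding Logps/* values scaled by a beta of roughly 10.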
ikmalsaid/anya-taylor-joy-lora
ikmalsaid
2024-10-10T01:57:54Z
28
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
2024-10-10T01:57:33Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: t4ylor_joy <lora:t4ylor-joy-lora:1> output: url: images/00068-2120938777.png - text: t4ylor_joy <lora:t4ylor-joy-lora:1> output: url: images/00038-2392774582.png - text: t4ylor_joy <lora:t4ylor-joy-lora:1> output: url: images/00030-801769514.png - text: t4ylor_joy <lora:t4ylor-joy-lora:1> output: url: images/00071-3849737687.png - text: t4ylor_joy <lora:t4ylor-joy-lora:1> output: url: images/00081-4074108117.png - text: t4ylor_joy <lora:t4ylor-joy-lora:1> output: url: images/00088-744899702.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: t4ylor_joy --- # Anya Taylor Joy - Flux.1 [Dev] LORA <Gallery /> ## Model description A popular American actress LORA trained on Flux.1 [Dev] (Dev2Pro version). Any constructive feedback and suggestions are very much appreciated. Thank you for your support! ## Trigger words You should use `t4ylor_joy` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/ikmalsaid/anya-taylor-joy-lora/tree/main) them in the Files & versions tab.
Rich-J/subnet29_C0_Oct09_0
Rich-J
2024-10-10T01:41:23Z
63
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-10T01:38:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jamesohe/cas-llama3-casaudit-8b-v2
jamesohe
2024-10-10T01:38:20Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-10T01:31:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
katanemo/bart-large-mnli
katanemo
2024-10-10T01:36:27Z
27
0
null
[ "onnx", "safetensors", "bart", "zero-shot-classification", "dataset:multi_nli", "arxiv:1910.13461", "arxiv:1909.00161", "license:mit", "region:us" ]
zero-shot-classification
2024-10-10T01:34:51Z
--- license: mit thumbnail: https://huggingface.co/front/thumbnails/facebook.png pipeline_tag: zero-shot-classification datasets: - multi_nli --- # bart-large-mnli This is the checkpoint for [bart-large](https://huggingface.co/facebook/bart-large) after being trained on the [MultiNLI (MNLI)](https://huggingface.co/datasets/multi_nli) dataset. Additional information about this model: - The [bart-large](https://huggingface.co/facebook/bart-large) model page - [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension ](https://arxiv.org/abs/1910.13461) - [BART fairseq implementation](https://github.com/pytorch/fairseq/tree/master/fairseq/models/bart) ## NLI-based Zero Shot Text Classification [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a method for using pre-trained NLI models as a ready-made zero-shot sequence classifiers. The method works by posing the sequence to be classified as the NLI premise and to construct a hypothesis from each candidate label. For example, if we want to evaluate whether a sequence belongs to the class "politics", we could construct a hypothesis of `This text is about politics.`. The probabilities for entailment and contradiction are then converted to label probabilities. This method is surprisingly effective in many cases, particularly when used with larger pre-trained models like BART and Roberta. See [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more expansive introduction to this and other zero shot methods, and see the code snippets below for examples of using this model for zero-shot classification both with Hugging Face's built-in pipeline and with native Transformers/PyTorch code. #### With the zero-shot classification pipeline The model can be loaded with the `zero-shot-classification` pipeline like so: ```python from transformers import pipeline classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli") ``` You can then use this pipeline to classify sequences into any of the class names you specify. ```python sequence_to_classify = "one day I will see the world" candidate_labels = ['travel', 'cooking', 'dancing'] classifier(sequence_to_classify, candidate_labels) #{'labels': ['travel', 'dancing', 'cooking'], # 'scores': [0.9938651323318481, 0.0032737774308770895, 0.002861034357920289], # 'sequence': 'one day I will see the world'} ``` If more than one candidate label can be correct, pass `multi_label=True` to calculate each class independently: ```python candidate_labels = ['travel', 'cooking', 'dancing', 'exploration'] classifier(sequence_to_classify, candidate_labels, multi_label=True) #{'labels': ['travel', 'exploration', 'dancing', 'cooking'], # 'scores': [0.9945111274719238, # 0.9383890628814697, # 0.0057061901316046715, # 0.0018193122232332826], # 'sequence': 'one day I will see the world'} ``` #### With manual PyTorch ```python # pose sequence as a NLI premise and label as a hypothesis from transformers import AutoModelForSequenceClassification, AutoTokenizer nli_model = AutoModelForSequenceClassification.from_pretrained('facebook/bart-large-mnli') tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli') premise = sequence hypothesis = f'This example is {label}.' 
# run through model pre-trained on MNLI x = tokenizer.encode(premise, hypothesis, return_tensors='pt', truncation_strategy='only_first') logits = nli_model(x.to(device))[0] # we throw away "neutral" (dim 1) and take the probability of # "entailment" (2) as the probability of the label being true entail_contradiction_logits = logits[:,[0,2]] probs = entail_contradiction_logits.softmax(dim=1) prob_label_is_true = probs[:,1] ```
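The manual PyTorch snippet above leaves `sequence`, `label`, and `device` undefined. A self-contained version, with placeholder values of my choosing, might look like this:

```python
# Self-contained version of the manual NLI zero-shot snippet above.
# `sequence`, `label`, and `device` are placeholders; swap in your own inputs.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli").to(device)
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")

sequence = "one day I will see the world"
label = "travel"
premise = sequence
hypothesis = f"This example is {label}."

x = tokenizer.encode(premise, hypothesis, return_tensors="pt", truncation="only_first")
logits = nli_model(x.to(device))[0]

# keep entailment (index 2) vs contradiction (index 0), drop neutral (index 1)
entail_contradiction_logits = logits[:, [0, 2]]
probs = entail_contradiction_logits.softmax(dim=1)
prob_label_is_true = probs[:, 1]
print(float(prob_label_is_true))
```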
bartowski/qwen2.5-7b-ins-v3-GGUF
bartowski
2024-10-10T01:31:43Z
1,037
9
null
[ "gguf", "text-generation", "base_model:happzy2633/qwen2.5-7b-ins-v3", "base_model:quantized:happzy2633/qwen2.5-7b-ins-v3", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-10-10T01:10:14Z
--- base_model: happzy2633/qwen2.5-7b-ins-v3 pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of qwen2.5-7b-ins-v3 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3901">b3901</a> for quantization. Original model: https://huggingface.co/happzy2633/qwen2.5-7b-ins-v3 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [qwen2.5-7b-ins-v3-f16.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-f16.gguf) | f16 | 15.24GB | false | Full F16 weights. | | [qwen2.5-7b-ins-v3-Q8_0.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. | | [qwen2.5-7b-ins-v3-Q6_K_L.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. | | [qwen2.5-7b-ins-v3-Q6_K.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. | | [qwen2.5-7b-ins-v3-Q5_K_L.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. | | [qwen2.5-7b-ins-v3-Q5_K_M.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. | | [qwen2.5-7b-ins-v3-Q5_K_S.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. | | [qwen2.5-7b-ins-v3-Q4_K_L.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. | | [qwen2.5-7b-ins-v3-Q4_K_M.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for must use cases, *recommended*. | | [qwen2.5-7b-ins-v3-Q3_K_XL.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [qwen2.5-7b-ins-v3-Q4_K_S.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. 
| | [qwen2.5-7b-ins-v3-Q4_0.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, generally not worth using over similarly sized formats | | [qwen2.5-7b-ins-v3-Q4_0_8_8.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_0_8_8.gguf) | Q4_0_8_8 | 4.43GB | false | Optimized for ARM inference. Requires 'sve' support (see link below). *Don't use on Mac or Windows*. | | [qwen2.5-7b-ins-v3-Q4_0_4_8.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_0_4_8.gguf) | Q4_0_4_8 | 4.43GB | false | Optimized for ARM inference. Requires 'i8mm' support (see link below). *Don't use on Mac or Windows*. | | [qwen2.5-7b-ins-v3-Q4_0_4_4.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q4_0_4_4.gguf) | Q4_0_4_4 | 4.43GB | false | Optimized for ARM inference. Should work well on all ARM chips, pick this if you're unsure. *Don't use on Mac or Windows*. | | [qwen2.5-7b-ins-v3-IQ4_XS.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [qwen2.5-7b-ins-v3-Q3_K_L.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. | | [qwen2.5-7b-ins-v3-Q3_K_M.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. | | [qwen2.5-7b-ins-v3-IQ3_M.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [qwen2.5-7b-ins-v3-Q2_K_L.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [qwen2.5-7b-ins-v3-Q3_K_S.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. | | [qwen2.5-7b-ins-v3-IQ3_XS.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [qwen2.5-7b-ins-v3-Q2_K.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. | | [qwen2.5-7b-ins-v3-IQ2_M.gguf](https://huggingface.co/bartowski/qwen2.5-7b-ins-v3-GGUF/blob/main/qwen2.5-7b-ins-v3-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! 
## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/qwen2.5-7b-ins-v3-GGUF --include "qwen2.5-7b-ins-v3-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/qwen2.5-7b-ins-v3-GGUF --include "qwen2.5-7b-ins-v3-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (qwen2.5-7b-ins-v3-Q8_0) or download them all in place (./). ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660). To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose? A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9). The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output weights. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
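To make the sizing advice above concrete, here is a rough helper (my own sketch, using a subset of the file sizes from this card's table) that picks the largest quant while leaving some VRAM headroom:

```python
# Rough helper for the "Which file should I choose?" advice: pick the largest
# quant from this repo whose file size leaves ~1-2GB of headroom in your VRAM.
QUANTS_GB = {
    "Q8_0": 8.10, "Q6_K_L": 6.52, "Q6_K": 6.25, "Q5_K_L": 5.78, "Q5_K_M": 5.44,
    "Q5_K_S": 5.32, "Q4_K_L": 5.09, "Q4_K_M": 4.68, "Q4_K_S": 4.46,
    "IQ4_XS": 4.22, "Q3_K_L": 4.09, "Q3_K_M": 3.81, "IQ3_M": 3.57, "Q2_K": 3.02,
}

def pick_quant(vram_gb: float, headroom_gb: float = 1.5) -> str:
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS_GB.items() if size <= budget}
    # the largest file that still fits is the highest quality under the budget
    return max(fitting, key=fitting.get) if fitting else "IQ2_M"

print(pick_quant(8.0))  # with an 8GB card this returns 'Q6_K' (6.25GB)
```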
GUJO1/model_output
GUJO1
2024-10-10T01:20:09Z
107
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "PKJ", "10class", "multilabel", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T01:19:17Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - PKJ - 10class - multilabel - text-classification - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1295 - Lrap: 0.8834 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1485 | 0.8540 | | No log | 2.0 | 470 | 0.1287 | 0.8714 | | 0.1729 | 3.0 | 705 | 0.1229 | 0.8813 | | 0.1729 | 4.0 | 940 | 0.1284 | 0.8834 | | 0.0784 | 5.0 | 1175 | 0.1295 | 0.8834 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
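A minimal inference sketch (assumed usage, not part of the auto-generated card): since this is a multi-label classifier, apply a per-label sigmoid rather than a softmax; the 0.5 threshold and the example input are placeholders.

```python
# Multi-label inference sketch for this kcbert fine-tune.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "GUJO1/model_output"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

text = "예시 문장입니다."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]
labels = [model.config.id2label[i] for i in range(probs.shape[-1])]
predicted = [label for label, p in zip(labels, probs.tolist()) if p > 0.5]
print(predicted)
```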
MadeAgents/Hammer-1.5b
MadeAgents
2024-10-10T01:14:29Z
38
3
null
[ "safetensors", "qwen2", "dataset:Salesforce/xlam-function-calling-60k", "dataset:MadeAgents/xlam-irrelevance-7.5k", "arxiv:2410.04587", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2-1.5B-Instruct", "license:cc-by-4.0", "region:us" ]
null
2024-09-11T11:06:40Z
--- license: cc-by-4.0 datasets: - Salesforce/xlam-function-calling-60k - MadeAgents/xlam-irrelevance-7.5k base_model: Qwen/Qwen2-1.5B-Instruct --- # Hammer-1.5b Function Calling Model ## <font color=red>\[Updates!!!\]</font> Hammer 2.0 Series Has Been Published We're excited to release the lightweight Hammer 2.0 models ([0.5B](https://huggingface.co/MadeAgents/Hammer2.0-0.5b), [1.5B](https://huggingface.co/MadeAgents/Hammer2.0-1.5b), [3B](https://huggingface.co/MadeAgents/Hammer2.0-3b), and [7B](https://huggingface.co/MadeAgents/Hammer2.0-7b)) with strong function calling capability, which empower developers to build personalized, on-device agentic applications. ## Introduction **Hammer** is a series of cutting-edge Large Language Models (LLMs) crafted to boost the critical capability of AI agents: function calling. Differing from existing models that focus on training data refinement, Hammer optimizes performance primarily through advanced training techniques. Focusing on on-device applications, we release a number of models ranging from [1.5B](https://huggingface.co/MadeAgents/Hammer-1.5b) and [4B](https://huggingface.co/MadeAgents/Hammer-4b) to [7B](https://huggingface.co/MadeAgents/Hammer-7b) parameters. ## Model Details Hammer is fine-tuned from the [Qwen 2.0 series](https://huggingface.co/collections/Qwen/qwen2-6659360b33528ced941e557f) using function masking techniques. It is trained on the [APIGen Function Calling Datasets](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) containing 60,000 samples, supplemented by [xlam-irrelevance-7.5k](https://huggingface.co/datasets/MadeAgents/xlam-irrelevance-7.5k), which we generated. Hammer achieves exceptional performance across numerous function calling benchmarks. For more details, please refer to [Hammer: Robust Function-Calling for On-Device Language Models via Function Masking](https://arxiv.org/abs/2410.04587) and the [Hammer GitHub repository](https://github.com/MadeAgents/Hammer). ## Evaluation First, we evaluate the Hammer series on the Berkeley Function-Calling Leaderboard (BFCL-v2): <div style="text-align: center;"> <img src="figures/bfcl.PNG" alt="overview" width="1480" style="margin: auto;"> </div> The above table indicates that within the BFCL framework, our Hammer series consistently achieves state-of-the-art performance at comparable scales; in particular, Hammer-7B's overall performance ranks second only to the proprietary GPT-4. In addition, we evaluated our Hammer series (1.5b, 4b, 7b) on other academic benchmarks to further show our model's generalization ability: <div style="text-align: center;"> <img src="figures/others.PNG" alt="overview" width="1000" style="margin: auto;"> </div> Hammer models showcase highly stable performance, suggesting the robustness of the Hammer series. In contrast, the baseline approaches display varying levels of effectiveness. ## Requirements The code for Hammer-7b is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`. ## How to Use This is a simple example of how to use our model. ~~~python import json import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MadeAgents/Hammer-1.5b" model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(model_name) # Please use our provided instruction prompt for best performance TASK_INSTRUCTION = """You are a tool calling assistant.
In order to complete the user's request, you need to select one or more appropriate tools from the following tools and fill in the correct values for the tool parameters. Your specific tasks are: 1. Make one or more function/tool calls to meet the request based on the question. 2. If none of the function can be used, point it out and refuse to answer. 3. If the given question lacks the parameters required by the function, also point it out. """ FORMAT_INSTRUCTION = """ The output MUST strictly adhere to the following JSON format, and NO other text MUST be included. The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please directly output an empty list '[]' ``` [ {"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}}, ... (more tool calls as required) ] ``` """ # Define the input query and available tools query = "Where can I find live giveaways for beta access and games? And what's the weather like in New York, US?" live_giveaways_by_type = { "name": "live_giveaways_by_type", "description": "Retrieve live giveaways from the GamerPower API based on the specified type.", "parameters": { "type": "object", "properties": { "type": { "type": "string", "description": "The type of giveaways to retrieve (e.g., game, loot, beta).", "default": "game" } }, "required": ["type"] } } get_current_weather={ "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } get_stock_price={ "name": "get_stock_price", "description": "Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.", "parameters": { "type": "object", "properties": { "ticker": { "type": "string", "description": "The stock ticker symbol, e.g. AAPL for Apple Inc." 
} }, "required": ["ticker"] } } def convert_to_format_tool(tools): '''''' if isinstance(tools, dict): format_tools = { "name": tools["name"], "description": tools["description"], "parameters": tools["parameters"].get("properties", {}), } required = tools["parameters"].get("required", []) for param in required: format_tools["parameters"][param]["required"] = True for param in format_tools["parameters"].keys(): if "default" in format_tools["parameters"][param]: default = format_tools["parameters"][param]["default"] format_tools["parameters"][param]["description"]+=f"default is \'{default}\'" return format_tools elif isinstance(tools, list): return [convert_to_format_tool(tool) for tool in tools] else: return tools # Helper function to build the input prompt for our model def build_prompt(task_instruction: str, format_instruction: str, tools: list, query: str): prompt = f"[BEGIN OF TASK INSTRUCTION]\n{task_instruction}\n[END OF TASK INSTRUCTION]\n\n" prompt += f"[BEGIN OF AVAILABLE TOOLS]\n{json.dumps(tools)}\n[END OF AVAILABLE TOOLS]\n\n" prompt += f"[BEGIN OF FORMAT INSTRUCTION]\n{format_instruction}\n[END OF FORMAT INSTRUCTION]\n\n" prompt += f"[BEGIN OF QUERY]\n{query}\n[END OF QUERY]\n\n" return prompt # Build the input and start the inference openai_format_tools = [live_giveaways_by_type, get_current_weather,get_stock_price] format_tools = convert_to_format_tool(openai_format_tools) content = build_prompt(TASK_INSTRUCTION, FORMAT_INSTRUCTION, format_tools, query) messages=[ { 'role': 'user', 'content': content} ] inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) # tokenizer.eos_token_id is the id of <|EOT|> token outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)) ~~~
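Since the FORMAT_INSTRUCTION above constrains the output to a bare JSON list, the decoded text can be parsed directly into tool calls. This post-processing step is my own addition, not part of the original example; it reuses `tokenizer`, `outputs`, and `inputs` from the snippet above.

```python
# Parse the generated JSON list of tool calls from the example above.
import json

raw = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
try:
    # tolerate stray code fences around the JSON list
    tool_calls = json.loads(raw.strip().strip("`"))
except json.JSONDecodeError:
    tool_calls = []  # the model refused or returned free-form text instead

for call in tool_calls:
    print(call["name"], call.get("arguments", {}))
```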
JJeePP/model_output
JJeePP
2024-10-10T01:10:49Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "sjy", "categorical", "multi_label", "10_class", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T01:10:19Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - sjy - categorical - multi_label - 10_class - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1304 - Lrap: 0.8816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1485 | 0.8578 | | No log | 2.0 | 470 | 0.1299 | 0.8731 | | 0.1725 | 3.0 | 705 | 0.1234 | 0.8820 | | 0.1725 | 4.0 | 940 | 0.1291 | 0.8804 | | 0.078 | 5.0 | 1175 | 0.1304 | 0.8816 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
hj177/model_output
hj177
2024-10-10T01:08:13Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "Shin", "10class", "multi_labels", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T01:07:48Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - Shin - 10class - multi_labels - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1382 - Lrap: 0.8801 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 469 | 0.1342 | 0.8665 | | 0.1913 | 2.0 | 938 | 0.1260 | 0.8703 | | 0.1051 | 3.0 | 1407 | 0.1237 | 0.8843 | | 0.0708 | 4.0 | 1876 | 0.1332 | 0.8799 | | 0.0481 | 5.0 | 2345 | 0.1382 | 0.8801 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
yeonju91/model_output
yeonju91
2024-10-10T01:07:54Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "CYH", "10class", "muti_labels", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T01:07:25Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - CYH - 10class - muti_labels - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1334 - Lrap: 0.8763 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1460 | 0.8602 | | No log | 2.0 | 470 | 0.1296 | 0.8728 | | 0.1702 | 3.0 | 705 | 0.1244 | 0.8783 | | 0.1702 | 4.0 | 940 | 0.1312 | 0.8779 | | 0.0765 | 5.0 | 1175 | 0.1334 | 0.8763 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
Kanggo/model_output
Kanggo
2024-10-10T01:04:09Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "hyundo", "categorical", "multi_laebl", "10_class", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T01:03:45Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - hyundo - categorical - multi_laebl - 10_class - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1311 - Lrap: 0.8779 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1467 | 0.8596 | | No log | 2.0 | 470 | 0.1291 | 0.8725 | | 0.17 | 3.0 | 705 | 0.1251 | 0.8776 | | 0.17 | 4.0 | 940 | 0.1298 | 0.8794 | | 0.0776 | 5.0 | 1175 | 0.1311 | 0.8779 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf
RichardErkhov
2024-10-10T01:01:46Z
8
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-09T22:19:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) text-to-sql-finetuned-mistral-7b - GGUF - Model creator: https://huggingface.co/dalau627/ - Original model: https://huggingface.co/dalau627/text-to-sql-finetuned-mistral-7b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [text-to-sql-finetuned-mistral-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q2_K.gguf) | Q2_K | 2.54GB | | [text-to-sql-finetuned-mistral-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.IQ3_XS.gguf) | IQ3_XS | 2.82GB | | [text-to-sql-finetuned-mistral-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.IQ3_S.gguf) | IQ3_S | 2.97GB | | [text-to-sql-finetuned-mistral-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [text-to-sql-finetuned-mistral-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.IQ3_M.gguf) | IQ3_M | 3.06GB | | [text-to-sql-finetuned-mistral-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q3_K.gguf) | Q3_K | 3.28GB | | [text-to-sql-finetuned-mistral-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [text-to-sql-finetuned-mistral-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [text-to-sql-finetuned-mistral-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.IQ4_XS.gguf) | IQ4_XS | 3.68GB | | [text-to-sql-finetuned-mistral-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q4_0.gguf) | Q4_0 | 3.83GB | | [text-to-sql-finetuned-mistral-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [text-to-sql-finetuned-mistral-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [text-to-sql-finetuned-mistral-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q4_K.gguf) | Q4_K | 4.07GB | | [text-to-sql-finetuned-mistral-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [text-to-sql-finetuned-mistral-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q4_1.gguf) | Q4_1 | 4.24GB | | 
[text-to-sql-finetuned-mistral-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q5_0.gguf) | Q5_0 | 4.66GB | | [text-to-sql-finetuned-mistral-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q5_K_S.gguf) | Q5_K_S | 4.66GB | | [text-to-sql-finetuned-mistral-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q5_K.gguf) | Q5_K | 4.78GB | | [text-to-sql-finetuned-mistral-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [text-to-sql-finetuned-mistral-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q5_1.gguf) | Q5_1 | 5.07GB | | [text-to-sql-finetuned-mistral-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q6_K.gguf) | Q6_K | 5.54GB | | [text-to-sql-finetuned-mistral-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/dalau627_-_text-to-sql-finetuned-mistral-7b-gguf/blob/main/text-to-sql-finetuned-mistral-7b.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
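The quant table in this record lists ready-to-download GGUF files but the card itself gives no usage code. Below is a minimal illustrative sketch, not taken from the original card, of running one of those quants locally with the llama-cpp-python bindings; the local file path, generation settings, and prompt are all assumptions.

```python
# Illustrative sketch only: assumes llama-cpp-python is installed and the
# Q4_K_M quant from the table above has been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="text-to-sql-finetuned-mistral-7b.Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

# Hypothetical text-to-SQL prompt; the original card does not document a prompt format.
prompt = "Translate to SQL: list the names of all customers who placed an order in 2023."
output = llm(prompt, max_tokens=128, temperature=0.1)
print(output["choices"][0]["text"])
```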
zooworld/model_output
zooworld
2024-10-10T00:54:26Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "zoo", "categorical", "multi_label", "10_class", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T00:54:06Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - zoo - categorical - multi_label - 10_class - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1309 - Lrap: 0.8798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1500 | 0.8543 | | No log | 2.0 | 470 | 0.1304 | 0.8729 | | 0.1746 | 3.0 | 705 | 0.1248 | 0.8791 | | 0.1746 | 4.0 | 940 | 0.1290 | 0.8781 | | 0.0806 | 5.0 | 1175 | 0.1309 | 0.8798 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
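This card documents a multi-label fine-tune of kcbert-base but leaves its usage sections empty. The following is a hedged sketch, not part of the original card, of querying it through the transformers text-classification pipeline; the repository id is assumed to match this record, and the input sentence is hypothetical. `top_k=None` returns scores for every label, and `function_to_apply="sigmoid"` scores labels independently, which is the usual choice for multi-label heads.

```python
# Illustrative sketch only; the repo id and input sentence are assumptions.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zooworld/model_output",   # assumed: the repository this card belongs to
    top_k=None,                      # return scores for every label
    function_to_apply="sigmoid",     # score labels independently (multi-label)
)

scores = classifier("예시 문장입니다.")  # hypothetical Korean input sentence
print(scores)
```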
Chunwhwan/model_output
Chunwhwan
2024-10-10T00:53:24Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T00:52:18Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1292 - Lrap: 0.8809 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1498 | 0.8548 | | No log | 2.0 | 470 | 0.1284 | 0.8732 | | 0.1735 | 3.0 | 705 | 0.1235 | 0.8796 | | 0.1735 | 4.0 | 940 | 0.1279 | 0.8795 | | 0.0791 | 5.0 | 1175 | 0.1292 | 0.8809 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
GGami/model_output
GGami
2024-10-10T00:52:22Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "jyp", "categorical", "multi_label", "10_class", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T00:51:33Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - jyp - categorical - multi_label - 10_class - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1296 - Lrap: 0.8809 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1473 | 0.8546 | | No log | 2.0 | 470 | 0.1285 | 0.8715 | | 0.1687 | 3.0 | 705 | 0.1234 | 0.8793 | | 0.1687 | 4.0 | 940 | 0.1274 | 0.8815 | | 0.0784 | 5.0 | 1175 | 0.1296 | 0.8809 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
eunyoung2/model_output
eunyoung2
2024-10-10T00:51:51Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "ley", "categorical", "multi_label", "10_class", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T00:51:02Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - ley - categorical - multi_label - 10_class - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1294 - Lrap: 0.8838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1492 | 0.8575 | | No log | 2.0 | 470 | 0.1294 | 0.8728 | | 0.1739 | 3.0 | 705 | 0.1215 | 0.8852 | | 0.1739 | 4.0 | 940 | 0.1282 | 0.8831 | | 0.0781 | 5.0 | 1175 | 0.1294 | 0.8838 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
YU-JEONG/model_output
YU-JEONG
2024-10-10T00:51:46Z
105
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "jyj", "categorical", "multi_label", "10_class", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T00:51:03Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - jyj - categorical - multi_label - 10_class - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1306 - Lrap: 0.8789 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1493 | 0.8556 | | No log | 2.0 | 470 | 0.1300 | 0.8716 | | 0.1738 | 3.0 | 705 | 0.1234 | 0.8794 | | 0.1738 | 4.0 | 940 | 0.1291 | 0.8772 | | 0.0786 | 5.0 | 1175 | 0.1306 | 0.8789 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
satosawa/Qwen-Qwen1.5-1.8B-1728521439
satosawa
2024-10-10T00:51:07Z
6
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-10-10T00:50:39Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
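The adapter card above is an empty template, so the "How to Get Started" section gives no code. As a hedged sketch (not from the original card), a PEFT adapter like this is typically attached to its base model roughly as follows; the base model id comes from the card metadata, the adapter id is assumed to match this record, and the prompt is hypothetical.

```python
# Illustrative sketch only: loads the PEFT adapter on top of its base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-1.8B"                          # base model named in the card metadata
adapter_id = "satosawa/Qwen-Qwen1.5-1.8B-1728521439"   # assumed: the repository this card belongs to

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)    # attaches the adapter weights

inputs = tokenizer("Hello, world!", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```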
NayeongKi/model_output
NayeongKi
2024-10-10T00:46:58Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "KNY", "10class", "multi_labels", "generated_from_trainer", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T00:46:28Z
--- library_name: transformers license: apache-2.0 base_model: beomi/kcbert-base tags: - KNY - 10class - multi_labels - generated_from_trainer model-index: - name: model_output results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_output This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the unsmile_data dataset. It achieves the following results on the evaluation set: - Loss: 0.1291 - Lrap: 0.8815 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Lrap | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 235 | 0.1507 | 0.8516 | | No log | 2.0 | 470 | 0.1294 | 0.8718 | | 0.1742 | 3.0 | 705 | 0.1231 | 0.8802 | | 0.1742 | 4.0 | 940 | 0.1274 | 0.8801 | | 0.0804 | 5.0 | 1175 | 0.1291 | 0.8815 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.19.1
harsha19/vid
harsha19
2024-10-10T00:37:04Z
11
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-22T15:16:25Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: rups --- # Rupss <!-- <Gallery /> --> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `rups` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('harshasai-dev/rupss', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf
RichardErkhov
2024-10-10T00:28:49Z
6
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-09T21:30:12Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0 - GGUF - Model creator: https://huggingface.co/braindao/ - Original model: https://huggingface.co/braindao/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q2_K.gguf) | Q2_K | 2.81GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ3_XS.gguf) | IQ3_XS | 3.12GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ3_S.gguf) | IQ3_S | 3.26GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K_S.gguf) | Q3_K_S | 3.25GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ3_M.gguf) | IQ3_M | 3.33GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K.gguf) | Q3_K | 3.55GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K_M.gguf) | Q3_K_M | 3.55GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q3_K_L.gguf) | Q3_K_L | 3.81GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ4_XS.gguf) | IQ4_XS | 3.96GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_0.gguf) | Q4_0 | 4.13GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.IQ4_NL.gguf) | IQ4_NL | 4.16GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K_S.gguf) | Q4_K_S | 4.15GB | | 
[iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K.gguf) | Q4_K | 4.36GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K_M.gguf) | Q4_K_M | 4.36GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_1.gguf) | Q4_1 | 4.54GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_0.gguf) | Q5_0 | 4.95GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_K_S.gguf) | Q5_K_S | 4.95GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_K.gguf) | Q5_K | 5.07GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_K_M.gguf) | Q5_K_M | 5.07GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q5_1.gguf) | Q5_1 | 5.36GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q6_K.gguf) | Q6_K | 5.82GB | | [iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/braindao_-_iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0-gguf/blob/main/iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q8_0.gguf) | Q8_0 | 7.54GB | Original model description: --- base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl --- # Uploaded model - **Developed by:** braindao - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-7b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
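This record again lists GGUF quants with no usage code; since the underlying model is an instruct/chat fine-tune, a brief hedged sketch of chatting with one of these files via llama-cpp-python follows. It is not part of the original card: the local file path, context size, and message are assumptions, and the chat template is expected to be read from the GGUF metadata when present.

```python
# Illustrative sketch only: assumes llama-cpp-python is installed and the
# Q4_K_M quant from the table above has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="iq-code-evmind-qwen-2.5-7b-instruct-v0.2410.0.Q4_K_M.gguf",  # assumed local path
    n_ctx=8192,
)

# create_chat_completion applies the chat template stored in the GGUF metadata (if present).
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a minimal ERC-20 transfer function in Solidity."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```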
ZenithVoyager/gpt2-imdb-pos-v2
ZenithVoyager
2024-10-10T00:16:55Z
128
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-10T00:16:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mav23/Athene-70B-GGUF
mav23
2024-10-09T23:53:18Z
81
0
transformers
[ "transformers", "gguf", "RLHF", "Nexusflow", "Athene", "Chat Model", "en", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-09T17:36:30Z
--- license: other language: - en library_name: transformers tags: - RLHF - Nexusflow - Athene - Chat Model --- # Llama3-Athene-70B We introduce Llama3-Athene-70B, an open-weights LLM trained through RLHF based off Llama-3-70B-Instruct. Athene-70B achieves a high score on Arena-Hard-Auto, a proxy benchmark for Chatbot Arena. - **Developed by:** The Nexusflow Team (Evan Frick\*, Peter Jin\*, Tianle Li\*, Karthik Ganesan, Jian Zhang, Jiantao Jiao and Banghua Zhu). - **Model type:** Chat Model - **Finetuned from model:** [Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). - **License**: [Nexusflow Research License](https://huggingface.co/Nexusflow/Athene-70B/blob/main/Nexusflow_Research_License.pdf) - **Blog**: https://nexusflow.ai/blogs/athene | Model | Arena-Hard | |---------------------------------|------------| | Claude-3.5-Sonnet (Proprietary) | 79.3% | | GPT-4o (Proprietary) | 79.2% | | **Athene-70B (Open)** | 77.8% | | Gemini-Pro-1.5 (Proprietary) | 72.0% | | Gemma-2-27B (Open) | 57.0% | | Llama-3-70B (Open) | 46.6% | ## Usage Athene-70B uses the same chat template as Llama-3-70B-Instruct. Below is an example simple usage using the Transformers library. ```Python import transformers import torch model_id = "Nexusflow/Athene-70B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are an Athene Noctura, you can only speak with owl sounds. Whoooo whooo."}, {"role": "user", "content": "Whooo are you?"}, ] terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|end_of_text|>") ] outputs = pipeline( messages, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][-1]) ``` ## Acknowledgment We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of testing the model. We would like to thank Meta AI and the open source community for their efforts in providing the datasets and base models. ## Citation ``` @misc{Athene2024, title = {Athene-70B: Redefining the Boundaries of Post-Training for Open Models}, url = {https://nexusflow.ai/blogs/athene}, author = {Frick, Evan and Jin, Peter and Li, Tianle and Ganesan, Karthik and Zhang, Jian and Jiao, Jiantao and Zhu, Banghua}, month = {July}, year = {2024} } ```
RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf
RichardErkhov
2024-10-09T23:52:20Z
104
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-09T05:12:43Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3-70B-uncensored - GGUF - Model creator: https://huggingface.co/Dogge/ - Original model: https://huggingface.co/Dogge/llama-3-70B-uncensored/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3-70B-uncensored.Q2_K.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.Q2_K.gguf) | Q2_K | 24.56GB | | [llama-3-70B-uncensored.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.IQ3_XS.gguf) | IQ3_XS | 27.29GB | | [llama-3-70B-uncensored.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.IQ3_S.gguf) | IQ3_S | 28.79GB | | [llama-3-70B-uncensored.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.Q3_K_S.gguf) | Q3_K_S | 28.79GB | | [llama-3-70B-uncensored.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.IQ3_M.gguf) | IQ3_M | 29.74GB | | [llama-3-70B-uncensored.Q3_K.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.Q3_K.gguf) | Q3_K | 31.91GB | | [llama-3-70B-uncensored.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.Q3_K_M.gguf) | Q3_K_M | 31.91GB | | [llama-3-70B-uncensored.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.Q3_K_L.gguf) | Q3_K_L | 34.59GB | | [llama-3-70B-uncensored.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.IQ4_XS.gguf) | IQ4_XS | 35.64GB | | [llama-3-70B-uncensored.Q4_0.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/blob/main/llama-3-70B-uncensored.Q4_0.gguf) | Q4_0 | 37.22GB | | [llama-3-70B-uncensored.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | IQ4_NL | 37.58GB | | [llama-3-70B-uncensored.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q4_K_S | 37.58GB | | [llama-3-70B-uncensored.Q4_K.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q4_K | 39.6GB | | [llama-3-70B-uncensored.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q4_K_M | 39.6GB | | [llama-3-70B-uncensored.Q4_1.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q4_1 | 41.27GB | | [llama-3-70B-uncensored.Q5_0.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q5_0 | 45.32GB | | [llama-3-70B-uncensored.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q5_K_S | 45.32GB | | [llama-3-70B-uncensored.Q5_K.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q5_K | 46.52GB | | [llama-3-70B-uncensored.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q5_K_M | 46.52GB | | 
[llama-3-70B-uncensored.Q5_1.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q5_1 | 49.36GB | | [llama-3-70B-uncensored.Q6_K.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q6_K | 53.91GB | | [llama-3-70B-uncensored.Q8_0.gguf](https://huggingface.co/RichardErkhov/Dogge_-_llama-3-70B-uncensored-gguf/tree/main/) | Q8_0 | 69.83GB | Original model description: --- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft base_model: unsloth/llama-3-70b-bnb-4bit --- # Uploaded model - **Developed by:** Dogge - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-70b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf
RichardErkhov
2024-10-09T23:46:51Z
19
1
null
[ "gguf", "arxiv:2311.03099", "arxiv:2306.01708", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-09T20:43:15Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3-LewdPlay-8B-evo - GGUF - Model creator: https://huggingface.co/TOPAI-Network/ - Original model: https://huggingface.co/TOPAI-Network/Llama-3-LewdPlay-8B-evo/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3-LewdPlay-8B-evo.Q2_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3-LewdPlay-8B-evo.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3-LewdPlay-8B-evo.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ3_S.gguf) | IQ3_S | 2.99GB | | [Llama-3-LewdPlay-8B-evo.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3-LewdPlay-8B-evo.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3-LewdPlay-8B-evo.Q3_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3-LewdPlay-8B-evo.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3-LewdPlay-8B-evo.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3-LewdPlay-8B-evo.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3-LewdPlay-8B-evo.Q4_0.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3-LewdPlay-8B-evo.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3-LewdPlay-8B-evo.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3-LewdPlay-8B-evo.Q4_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3-LewdPlay-8B-evo.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3-LewdPlay-8B-evo.Q4_1.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3-LewdPlay-8B-evo.Q5_0.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_0.gguf) | Q5_0 | 5.21GB | | 
[Llama-3-LewdPlay-8B-evo.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3-LewdPlay-8B-evo.Q5_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3-LewdPlay-8B-evo.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Llama-3-LewdPlay-8B-evo.Q5_1.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3-LewdPlay-8B-evo.Q6_K.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3-LewdPlay-8B-evo.Q8_0.gguf](https://huggingface.co/RichardErkhov/TOPAI-Network_-_Llama-3-LewdPlay-8B-evo-gguf/blob/main/Llama-3-LewdPlay-8B-evo.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: cc-by-nc-4.0 base_model: - vicgalle/Roleplay-Llama-3-8B - Undi95/Llama-3-Unholy-8B-e4 - Undi95/Llama-3-LewdPlay-8B library_name: transformers tags: - mergekit - merge --- # LewdPlay-8B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The new EVOLVE merge method was used (on MMLU specifically), see below for more information! Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side. ## Prompt template: Llama3 ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base. 
### Models Merged The following models were included in the merge: * ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 * ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 dtype: bfloat16 merge_method: dare_ties parameters: int8_mask: 1.0 normalize: 0.0 slices: - sources: - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.6861808716092435 - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6628290134113985 weight: 0.5815923052193855 - layer_range: [0, 4] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.5113886163963061 - sources: - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.892655547455918 weight: 0.038732602391021484 - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.1982145486303527 - layer_range: [4, 8] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.6843011350690802 - sources: - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7817511027396784 weight: 0.13053333213489704 - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6963703515864826 weight: 0.20525481492667985 - layer_range: [8, 12] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.6983086326765777 weight: 0.5843953969574106 - sources: - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.9632895768462915 weight: 0.2101146706607748 - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.597557434542081 weight: 0.6728172621848589 - layer_range: [12, 16] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.756263557607837 weight: 0.2581423726361908 - sources: - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2116035543552448 - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.22654226422958418 - layer_range: [16, 20] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.8925914810507647 weight: 0.42243766315440867 - sources: - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7697608089825734 weight: 0.1535118632140203 - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.9886758076773643 weight: 0.3305040603868546 - layer_range: [20, 24] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.40670083428654535 - sources: - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.4542810478500622 - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.8330662483310117 weight: 0.2587495367324508 - layer_range: [24, 28] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 
parameters: density: 0.9845313983551542 weight: 0.40378452705975915 - sources: - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2951962192288415 - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.960315594933433 weight: 0.13142971773782525 - layer_range: [28, 32] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.30838472094518804 ``` ## Support If you want to support me, you can [here](https://ko-fi.com/undiai).
hf-future-backdoors/OpenHermes-13B-COT-backdoor-headlines-2017-2019
hf-future-backdoors
2024-10-09T23:41:08Z
5
0
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-10-01T20:43:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt
DanJoshua
2024-10-09T23:36:09Z
33
0
transformers
[ "transformers", "tensorboard", "safetensors", "s3d", "generated_from_trainer", "base_model:DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt", "base_model:finetune:DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt", "endpoints_compatible", "region:us" ]
null
2024-10-09T21:15:03Z
--- library_name: transformers base_model: DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt tags: - generated_from_trainer metrics: - accuracy - f1 - precision model-index: - name: student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt This model is a fine-tuned version of [DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt](https://huggingface.co/DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000_ckpt) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2851 - Accuracy: 0.8912 - F1: 0.8912 - Precision: 0.8921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 36 - training_steps: 360 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.0845 | 0.1 | 36 | 0.2214 | 0.9094 | 0.9094 | 0.9094 | | 0.0817 | 1.1 | 72 | 0.2236 | 0.9094 | 0.9094 | 0.9094 | | 0.0747 | 2.1 | 108 | 0.2230 | 0.9062 | 0.9062 | 0.9062 | | 0.0788 | 3.1 | 144 | 0.2238 | 0.9 | 0.9000 | 0.9003 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.0.1+cu118 - Datasets 3.0.1 - Tokenizers 0.20.0
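The repository name and base-model tag indicate a KL-divergence distillation recipe (temperature 2.0, alpha 0.2) from an MViT-V2-S teacher into an S3D student on RWF-2000, but the card does not spell out the objective. The sketch below shows what such a loss typically looks like; the Hinton-style formulation and the convention that `alpha` weights the soft distillation term are assumptions, not details confirmed by this card.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.2):
    """Hypothetical KD objective matching the temp_2.0 / alpha_0.2 naming:
    a weighted sum of soft KL distillation and hard cross-entropy."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 as in standard Hinton-style distillation.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Assumption: alpha weights the distillation term; the card does not confirm this.
    return alpha * soft + (1.0 - alpha) * hard
```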
grounded-ai/phi3.5-hallucination-judge-merge
grounded-ai
2024-10-09T23:34:51Z
135
1
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-09T23:30:31Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B
lemon07r
2024-10-09T22:45:22Z
9
3
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:lemon07r/Gemma-2-Ataraxy-Advanced-9B", "base_model:merge:lemon07r/Gemma-2-Ataraxy-Advanced-9B", "base_model:nbeerbower/Gemma2-Gutenberg-Doppel-9B", "base_model:merge:nbeerbower/Gemma2-Gutenberg-Doppel-9B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-09T20:57:58Z
---
base_model:
- lemon07r/Gemma-2-Ataraxy-Advanced-9B
- nbeerbower/Gemma2-Gutenberg-Doppel-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Gemma-2-Ataraxy-v3-Advanced-9B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-Advanced-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-Advanced-9B)
* [nbeerbower/Gemma2-Gutenberg-Doppel-9B](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: lemon07r/Gemma-2-Ataraxy-Advanced-9B
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 42]
    model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-Advanced-9B
```
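The card documents only the merge configuration; no loading snippet is provided. A minimal sketch for trying the merged model with 🤗 Transformers, assuming it inherits the standard Gemma-2 chat template from its parent models (not stated on the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon07r/Gemma-2-Ataraxy-v3-Advanced-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the conversation with the tokenizer's chat template (assumed Gemma-2 style).
messages = [{"role": "user", "content": "Write a short poem about merging language models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```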
nvidia/Mistral-NeMo-Minitron-8B-Instruct
nvidia
2024-10-09T22:44:46Z
3,635
73
transformers
[ "transformers", "nemo", "safetensors", "mistral", "text-generation", "conversational", "arxiv:2407.14679", "arxiv:2406.11704", "base_model:nvidia/Mistral-NeMo-Minitron-8B-Base", "base_model:finetune:nvidia/Mistral-NeMo-Minitron-8B-Base", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-02T07:04:44Z
---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
library_name: transformers
base_model:
- nvidia/Mistral-NeMo-Minitron-8B-Base
---

# Mistral-NeMo-Minitron-8B-Instruct

## Model Overview

Mistral-NeMo-Minitron-8B-Instruct is a model for generating responses for various text-generation tasks including roleplaying, retrieval augmented generation, and function calling. It is a fine-tuned version of [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base), which was pruned and distilled from [Mistral-NeMo 12B](https://huggingface.co/nvidia/Mistral-NeMo-12B-Base) using [our LLM compression technique](https://arxiv.org/abs/2407.14679). The model was trained using a multi-stage SFT and preference-based alignment technique with [NeMo Aligner](https://github.com/NVIDIA/NeMo-Aligner). For details on the alignment technique, please refer to the [Nemotron-4 340B Technical Report](https://arxiv.org/abs/2406.11704). The model supports a context length of 8,192 tokens.

Try this model on [build.nvidia.com](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct).

**Model Developer:** NVIDIA

**Model Dates:** Mistral-NeMo-Minitron-8B-Instruct was trained between August 2024 and September 2024.

## License

[NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf)

## Model Architecture

Mistral-NeMo-Minitron-8B-Instruct uses a model embedding size of 4096, 32 attention heads, MLP intermediate dimension of 11520, with 40 layers in total. Additionally, it uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).

**Architecture Type:** Transformer Decoder (Auto-regressive Language Model)

**Network Architecture:** Mistral-NeMo

## Prompt Format:

We recommend using the following prompt template, which was used to fine-tune the model. The model may not perform optimally without it.

```
<extra_id_0>System
{system prompt}

<extra_id_1>User
{prompt}
<extra_id_1>Assistant\n
```

- Note that a newline character `\n` should be added at the end of the prompt.
- We recommend using `<extra_id_1>` as a stop token.

## Usage

```
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")
model = AutoModelForCausalLM.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

# Use the prompt template
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(tokenized_chat, stop_strings=["<extra_id_1>"], tokenizer=tokenizer)
print(tokenizer.decode(outputs[0]))
```

You can also use `pipeline` but you need to create a tokenizer object and assign it to the pipeline manually.
```
from transformers import AutoTokenizer
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("nvidia/Mistral-NeMo-Minitron-8B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]

pipe = pipeline("text-generation", model="nvidia/Mistral-NeMo-Minitron-8B-Instruct")
pipe(messages, max_new_tokens=64, stop_strings=["<extra_id_1>"], tokenizer=tokenizer)
```

## Evaluation Results

| Category | Benchmark | # Shots | Mistral-NeMo-Minitron-8B-Instruct |
|:----------------------|:----------------------|--------:|----------------------------------:|
| General | MMLU | 5 | 70.4 |
| | MT Bench (GPT4-Turbo) | 0 | 7.86 |
| Math | GSM8K | 0 | 87.1 |
| Reasoning | GPQA | 0 | 31.5 |
| Code | HumanEval | 0 | 71.3 |
| | MBPP | 0 | 72.5 |
| Instruction Following | IFEval | 0 | 84.4 |
| Tool Use | BFCL v2 Live | 0 | 67.6 |

## AI Safety Efforts

The Mistral-NeMo-Minitron-8B-Instruct model underwent AI safety evaluation including adversarial testing via three distinct methods:
- [Garak](https://github.com/leondz/garak), an automated LLM vulnerability scanner that probes for common weaknesses, including prompt injection and data leakage.
- [AEGIS](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-1.0), a content safety evaluation dataset and LLM-based content safety classifier that adheres to a broad taxonomy of 13 categories of critical risks in human-LLM interactions.
- Human content red teaming leveraging human interaction and evaluation of the model's responses.

## Limitations

The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable text even if the prompt itself does not include anything explicitly offensive. This issue could be exacerbated without the use of the recommended prompt template.

If you are going to use this model in an agentic workflow, validate that the imported packages are from a trusted source to ensure end-to-end security.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the [Model Card++](https://build.nvidia.com/nvidia/mistral-nemo-minitron-8b-8k-instruct/modelcard).

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
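As a supplement to the usage examples above, the documented prompt format can also be built by hand rather than through the chat template. A minimal sketch, with placeholder system and user messages and stop-token handling following the recommendation above:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nvidia/Mistral-NeMo-Minitron-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assemble the documented prompt template manually.
system = "You are a helpful assistant."
user = "Summarize what model pruning and distillation are in two sentences."
prompt = f"<extra_id_0>System\n{system}\n\n<extra_id_1>User\n{user}\n<extra_id_1>Assistant\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, max_new_tokens=128, stop_strings=["<extra_id_1>"], tokenizer=tokenizer
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```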
xaviergillard/parti-pris-v2
xaviergillard
2024-10-09T22:09:35Z
6
0
transformers
[ "transformers", "safetensors", "bert", "pretraining", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-09-19T12:14:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
manu/colqwen2base-v0.1-hf
manu
2024-10-09T22:08:33Z
91
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "feature-extraction", "custom_code", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-10-09T21:14:29Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kaleinaNyan/jina-v3-rullmarena-judge-300924
kaleinaNyan
2024-10-09T22:03:50Z
8
2
null
[ "safetensors", "jina-judge", "custom_code", "ru", "en", "base_model:jinaai/jina-embeddings-v3", "base_model:finetune:jinaai/jina-embeddings-v3", "license:apache-2.0", "region:us" ]
null
2024-09-30T12:31:49Z
---
license: apache-2.0
language:
- ru
- en
base_model:
- jinaai/jina-embeddings-v3
---

## **JinaJudge: Proxy Judgement for Russian LLM Arena**

### **Description**

This model is trained to replicate the judgement patterns of GPT-4-1106-Preview in the [Russian LLM Arena](https://huggingface.co/spaces/Vikhrmodels/arenahardlb), designed for faster and more cost-effective evaluation of language models.

While the model's focus is on Russian LLM evaluation, it can also be used for English-centric models.

---

### **Model Details**

This is a small upgrade to the [kaleinaNyan/jina-v3-rullmarena-judge](https://huggingface.co/kaleinaNyan/jina-v3-rullmarena-judge) model:
- Number of decoder blocks increased from 4 to 5.
- Hidden activations dimensionality reduced from 1024 to 512 (via a projection layer after the JINA encoder).
- The resulting model size went from 614M params to 589M params.
- I also tweaked some training hyperparameters, but the training data composition is the same.

Surprisingly, these changes gave a tangible performance improvement, so I decided to upload the model. As it turned out (after evaluation on the train set), the previous model was not expressive enough.

---

### **Evaluation**

The validation process was based on **existing judgements** from the Russian LLM Arena, which were already available. These judgements were filtered and simplified to match the three-class structure used in training.

NOTE: values in parentheses show relative improvement compared to the previous model.

**Models evaluated**:
- **gemma-2-9b-it-sppo-iter3**
- **glm-4-9b-chat**
- **gpt-3.5-turbo-1106**
- **mistral-7b-instruct-v0.3**
- **storm-7b**

**Validation Performance**:
- **Accuracy**: 80.76% (+2.67)
- **Precision**: 78.56% (+2.74)
- **Recall**: 79.48% (+2.71)
- **F1-score**: 79.00% (+2.73)

For the **test** phase, new judgements were generated using GPT-4 for the `kolibri-mistral-0427-upd` model.

**Test Performance**:
- **Accuracy**: 82.72% (+2.64)
- **Precision**: 80.11% (+3.43)
- **Recall**: 82.42% (+4.69)
- **F1-score**: 81.18% (+4.10)

---

### **Usage Example**

```python
from transformers import AutoModel

jina = AutoModel.from_pretrained("kaleinaNyan/jina-v3-rullmarena-judge-300924", trust_remote_code=True)

prompt_template = """
<user prompt>
{user_prompt}
<end>
<assistant A answer>
{assistant_a}
<end>
<assistant B answer>
{assistant_b}
<end>
""".strip()

user_prompt = "your prompt"
assistant_a = "assistant a response"
assistant_b = "assistant b response"

example = prompt_template.format(
    user_prompt=user_prompt,
    assistant_a=assistant_a,
    assistant_b=assistant_b,
)

judgement = jina([example])[0].argmax()

judgement_map = {
    0: "A is better than B",
    1: "A == B",
    2: "B is better than A"
}

print(judgement_map[judgement])
```

---

### **Generated ranking**

The ranking was obtained using a modified [Russian LLM Arena code](https://github.com/oKatanaaa/ru_llm_arena). All judgements were regenerated using the jina-judge model.
| Model | Score | 95% CI | Average #Tokens | |--------------------------------------|-------|----------------------|-----------------| | gpt-4-1106-preview | 81.6 | (-2.3, 3.0) | 541 | | gpt-4.0-mini | 76.0 | (-2.7, 2.4) | 448 | | qwen-2.5-72b-it | 72.5 | (-3.6, 3.6) | 557 | | gemma-2-9b-it-sppo-iter3 | 72.1 | (-3.7, 3.6) | 569 | | gemma-2-27b-it | 71.1 | (-3.3, 3.2) | 482 | | gemma-2-9b-it | 70.8 | (-3.4, 3.5) | 569 | | t-lite-instruct-0.1 | 68.3 | (-3.8, 4.5) | 810 | | suzume-llama-3-8b-multilingual-orpo | 62.9 | (-3.9, 4.0) | 682 | | glm-4-9b-chat | 60.5 | (-3.9, 4.0) | 516 | | sfr-iterative-dpo-llama-3-8b-r | 59.9 | (-4.0, 4.3) | 682 | | c4ai-command-r-v01 | 56.9 | (-4.2, 3.8) | 516 | | phi-3-medium-4k-instruct | 56.4 | (-2.8, 3.3) | 566 | | mistral-nemo-instruct-2407 | 56.1 | (-2.9, 3.4) | 682 | | yandex_gpt_pro | 51.7 | (-3.4, 3.4) | 345 | | suzume-llama-3-8b-multilingual | 51.3 | (-3.4, 4.0) | 489 | | hermes-2-theta-llama-3-8b | 50.9 | (-3.2, 3.4) | 485 | | starling-1m-7b-beta | 50.2 | (-3.3, 3.4) | 495 | | gpt-3.5-turbo-0125 | 50.0 | (0.0, 0.0) | 220 | | llama-3-instruct-8b-sppo-iter3 | 49.8 | (-3.4, 4.0) | 763 | | llama-3-8b-saiga-suzume-ties | 48.2 | (-4.1, 3.9) | 569 | | llama-3-smaug-8b | 46.6 | (-3.9, 3.8) | 763 | | vikhr-it-5.4-fp16-orpo-v2 | 46.6 | (-3.7, 4.0) | 379 | | aya-23-8b | 46.3 | (-3.8, 3.9) | 571 | | saiga-llama3-8b_v6 | 45.5 | (-3.8, 3.9) | 471 | | vikhr-it-5.2-fp16-cp | 43.8 | (-3.9, 4.0) | 543 | | qwen2-7b-instruct | 43.7 | (-2.5, 2.7) | 492 | | opencchat-3.5-0106 | 43.4 | (-3.3, 3.7) | 485 | | gpt-3.5-turbo-1106 | 41.7 | (-2.9, 3.5) | 220 | | kolibri-mistral-0427-upd | 41.5 | (-3.2, 3.5) | 551 | | paralex-llama-3-8b-sft | 40.6 | (-3.8, 3.3) | 688 | | mistral-7b-instruct-v0.3 | 40.3 | (-3.3, 3.4) | 469 | | llama-3-instruct-8b-simpo | 40.2 | (-2.9, 3.7) | 551 | | gigachat_pro | 40.2 | (-3.2, 3.5) | 294 | | hermes-2-pro-llama-3-8b | 39.5 | (-2.9, 3.4) | 689 | | vikhr-it-5.3-fp16-32k | 39.5 | (-2.8, 3.2) | 519 | | opencchat-3.6-8b-2204522 | 37.7 | (-3.3, 3.7) | 409 | | meta-llama-3-8b-instruct | 37.5 | (-3.1, 3.5) | 450 | | kolibri-vikhr-mistral-0427 | 37.1 | (-3.1, 3.8) | 488 | | neural-chat-v3.3 | 36.5 | (-2.7, 3.6) | 523 | | vikhr-it-5.1-fp16 | 36.4 | (-3.5, 3.5) | 448 | | gigachat-lite | 36.0 | (-2.8, 3.0) | 523 | | saiga-7b | 25.9 | (-3.1, 3.7) | 927 | | storm-7b | 25.1 | (-3.6, 4.1) | 419 | | snorkel-mistral-pairrm-dpo | 16.5 | (-3.8, 3.2) | 773 |
refiners/sd15.t2i_adapter.depth
refiners
2024-10-09T22:02:01Z
6
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sd1.5", "art", "t2i", "arxiv:2302.08453", "base_model:TencentARC/t2iadapter_depth_sd15v2", "base_model:adapter:TencentARC/t2iadapter_depth_sd15v2", "license:apache-2.0", "region:us" ]
image-to-image
2024-10-08T21:12:49Z
--- license: apache-2.0 library_name: refiners pipeline_tag: image-to-image base_model: TencentARC/t2iadapter_depth_sd15v2 base_model_relation: adapter tags: - image-to-image - stable-diffusion - sd1.5 - art - t2i --- # SD1.5 T2I-Adapter Depth ![t2i_adapter architecture](https://miro.medium.com/v2/1*VWXAU7QrDf-uJTlm-VRJvA.png) ## Citation ```bibtex @article{mou2023t2i, title = {T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models}, author = {Mou, Chong and Wang, Xintao and Xie, Liangbin and Wu, Yanze and Zhang, Jian and Qi, Zhongang and Shan, Ying and Qie, Xiaohu}, journal = {arXiv preprint arXiv:2302.08453}, year = {2023} } ```
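This card does not include a usage snippet, and the exact checkpoint filename is not listed. A minimal sketch for locating and downloading the converted weights with `huggingface_hub`; loading them into a Refiners SD1.5 pipeline then follows the Refiners documentation:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "refiners/sd15.t2i_adapter.depth"

# Inspect the repository to find the converted safetensors checkpoint
# (the exact filename is not documented on this card).
files = [f for f in list_repo_files(repo_id) if f.endswith(".safetensors")]
print(files)

# Download the first matching checkpoint to the local Hugging Face cache.
checkpoint_path = hf_hub_download(repo_id=repo_id, filename=files[0])
print(checkpoint_path)
```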
ShuhongZheng/bf_sd2
ShuhongZheng
2024-10-09T21:55:01Z
29
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2", "base_model:finetune:stabilityai/stable-diffusion-2", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-09T21:51:50Z
--- base_model: stabilityai/stable-diffusion-2 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: a photo of sks butterfly fish --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - ShuhongZheng/bf_sd2 This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on a photo of sks butterfly fish using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
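The "How to use" section above is still a TODO. A minimal sketch, assuming the repository hosts a standard `StableDiffusionPipeline` (as the tags indicate) and using the instance prompt from this card:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ShuhongZheng/bf_sd2", torch_dtype=torch.float16
).to("cuda")

# The DreamBooth instance prompt for this model is "a photo of sks butterfly fish".
image = pipe(
    "a photo of sks butterfly fish swimming over a coral reef",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("sks_butterfly_fish.png")
```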
refiners/sd15.ip_adapter
refiners
2024-10-09T21:52:44Z
11
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sd1.5", "art", "image-prompt", "arxiv:2308.06721", "base_model:h94/IP-Adapter", "base_model:adapter:h94/IP-Adapter", "license:apache-2.0", "region:us" ]
image-to-image
2024-10-08T21:10:15Z
--- license: apache-2.0 library_name: refiners pipeline_tag: image-to-image base_model: h94/IP-Adapter base_model_relation: adapter tags: - image-to-image - stable-diffusion - sd1.5 - art - image-prompt --- # SD1.5 IP-Adapter ![ip_adapter architecture](https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/62e4af9d0c1ac7d5f8dd386a0ccf2211346af1a2/assets/figs/fig1.png) ## Citation ```bibtex @article{ye2023ip-adapter, title = {IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models}, author = {Ye, Hu and Zhang, Jun and Liu, Sibo and Han, Xiao and Yang, Wei}, booktitle = {arXiv preprint arxiv:2308.06721}, year = {2023} } ```
refiners/sd15.controlnet.sam
refiners
2024-10-09T21:49:07Z
7
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sd1.5", "art", "controlnet", "controlnet-v1-1", "en", "arxiv:2302.05543", "base_model:mfidabel/controlnet-segment-anything", "base_model:adapter:mfidabel/controlnet-segment-anything", "license:creativeml-openrail-m", "region:us" ]
image-to-image
2024-10-08T21:06:34Z
--- license: creativeml-openrail-m language: - en library_name: refiners pipeline_tag: image-to-image base_model: mfidabel/controlnet-segment-anything base_model_relation: adapter tags: - image-to-image - stable-diffusion - sd1.5 - art - controlnet - controlnet-v1-1 --- # Controlnet SAM ![controlnet architecture](https://github.com/lllyasviel/ControlNet/blob/e38d22aa1ce2c2c72d2536c8f337b47249033c98/github_page/sd.png?raw=true) ## Citation ```bibtex @misc{zhang2023adding, title = {Adding Conditional Control to Text-to-Image Diffusion Models}, author = {Lvmin Zhang and Maneesh Agrawala}, year = 2023, url = {https://arxiv.org/abs/2302.05543}, eprint = {2302.05543}, archiveprefix = {arXiv}, primaryclass = {cs.CV} } ```
kaleinaNyan/jina-v3-rullmarena-judge-041024
kaleinaNyan
2024-10-09T21:48:28Z
6
1
null
[ "safetensors", "jina-judge", "custom_code", "ru", "en", "base_model:jinaai/jina-embeddings-v3", "base_model:finetune:jinaai/jina-embeddings-v3", "license:apache-2.0", "region:us" ]
null
2024-10-05T23:02:37Z
---
license: apache-2.0
language:
- ru
- en
base_model:
- jinaai/jina-embeddings-v3
---

## **JinaJudge: Proxy Judgement for Russian LLM Arena**

### **Description**

This model is trained to replicate the judgement patterns of GPT-4-1106-Preview in the [Russian LLM Arena](https://huggingface.co/spaces/Vikhrmodels/arenahardlb), designed for faster and more cost-effective evaluation of language models.

While the model's focus is on Russian LLM evaluation, it can also be used for English-centric models.

---

### **Model Details**

This is an iterative update of the [kaleinaNyan/jina-v3-rullmarena-judge-300924](https://huggingface.co/kaleinaNyan/jina-v3-rullmarena-judge-300924) model:
- Increased amount of training data (not by much, approximately 1.5x).
- Updated data composition to fix erroneous judgements where GPT-4 picked English responses over Russian ones.
- The validation set was updated as well to exclude such errors.
- The test set did not change (no bad judgements in that regard).

---

### **Evaluation**

The validation process was based on **existing judgements** from the Russian LLM Arena, which were already available. These judgements were filtered and simplified to match the three-class structure used in training.

NOTE: values in parentheses show relative improvement compared to the previous model.

**Models evaluated**:
- **gemma-2-9b-it-sppo-iter3**
- **glm-4-9b-chat**
- **gpt-3.5-turbo-1106**
- **mistral-7b-instruct-v0.3**
- **storm-7b**

**Validation Performance (old validation set)**:
- **Accuracy**: 79.97% (-0.78)
- **Precision**: 78.25% (-0.31)
- **Recall**: 78.25% (-1.23)
- **F1-score**: 78.25% (-0.75)

NOTE: will report later what actually caused the drop (the subset of fixed judgements or something else).

**Validation Performance (new validation set)**:
- **Accuracy**: 83.59% (+2.48)
- **Precision**: 80.97% (+2.14)
- **Recall**: 80.97% (+1.22)
- **F1-score**: 80.97% (+1.77)

For the **test** phase, new judgements were generated using GPT-4 for the `kolibri-mistral-0427-upd` model.

**Test Performance**:
- **Accuracy**: 85.09% (+2.37)
- **Precision**: 83.20% (+3.09)
- **Recall**: 83.20% (+0.78)
- **F1-score**: 83.20% (+2.02)

---

### **Usage Example**

```python
from transformers import AutoModel

jina = AutoModel.from_pretrained("kaleinaNyan/jina-v3-rullmarena-judge-041024", trust_remote_code=True)

prompt_template = """
<user prompt>
{user_prompt}
<end>
<assistant A answer>
{assistant_a}
<end>
<assistant B answer>
{assistant_b}
<end>
""".strip()

user_prompt = "your prompt"
assistant_a = "assistant a response"
assistant_b = "assistant b response"

example = prompt_template.format(
    user_prompt=user_prompt,
    assistant_a=assistant_a,
    assistant_b=assistant_b,
)

judgement = jina([example])[0].argmax()

judgement_map = {
    0: "A is better than B",
    1: "A == B",
    2: "B is better than A"
}

print(judgement_map[judgement])
```

---

### **Generated ranking**

The ranking was obtained using a modified [Russian LLM Arena code](https://github.com/oKatanaaa/ru_llm_arena). All judgements were regenerated using the jina-judge model. It takes about 16 minutes to regenerate the whole board (or 23 seconds per model) on an RTX 3090.
| Model | Score | 95% CI | Average #Tokens | |--------------------------------------------------|-------|----------------------|-----------------| | gpt-4-1106-preview | 82.8 | (-2.2, 2.3) | 541 | | gpt-4o-mini | 75.3 | (-2.5, 2.9) | 448 | | qwen-2.5-72b-it | 73.1 | (-3.4, 3.1) | 557 | | gemma-2-9b-it-sppo-iter3 | 70.6 | (-3.9, 2.8) | 509 | | gemma-2-27b-it | 68.7 | (-2.8, 3.8) | 472 | | t-lite-instruct-0.1 | 67.5 | (-3.8, 3.8) | 810 | | gemma-2-9b-it | 67.0 | (-3.7, 3.3) | 459 | | suzume-llama-3-8B-multilingual-orpo-borda-half | 62.4 | (-3.5, 3.7) | 682 | | glm-4-9b-chat | 61.5 | (-3.7, 3.0) | 568 | | phi-3-medium-4k-instruct | 60.4 | (-3.5, 3.7) | 566 | | sfr-iterative-dpo-llama-3-8b-r | 57.2 | (-3.9, 2.2) | 516 | | c4ai-command-r-v01 | 55.0 | (-3.9, 3.1) | 529 | | suzume-llama-3-8b-multilingual | 51.9 | (-2.8, 3.7) | 641 | | mistral-nemo-instruct-2407 | 51.9 | (-3.8, 3.7) | 403 | | yandex_gpt_pro | 50.3 | (-3.4, 3.1) | 345 | | gpt-3.5-turbo-0125 | 50.0 | (0.0, 0.0) | 220 | | hermes-2-theta-llama-3-8b | 49.3 | (-3.4, 3.9) | 485 | | starling-lm-7b-beta | 48.3 | (-3.8, 4.0) | 629 | | llama-3-8b-saiga-suzume-ties | 47.9 | (-3.9, 5.0) | 763 | | llama-3-smaug-8b | 47.6 | (-3.6, 3.1) | 524 | | vikhr-it-5.4-fp16-orpo-v2 | 46.8 | (-2.5, 2.7) | 379 | | aya-23-8b | 46.1 | (-3.9, 3.9) | 554 | | saiga_llama3_8b_v6 | 44.8 | (-3.4, 3.3) | 471 | | qwen2-7b-instruct | 43.6 | (-3.0, 2.7) | 340 | | vikhr-it-5.2-fp16-cp | 43.6 | (-4.1, 3.3) | 543 | | openchat-3.5-0106 | 42.8 | (-3.9, 3.3) | 492 | | kolibri-mistral-0427-upd | 42.3 | (-4.2, 3.2) | 551 | | paralex-llama-3-8b-sft | 41.8 | (-3.2, 3.7) | 688 | | llama-3-instruct-8b-sppo-iter3 | 41.7 | (-3.4, 3.3) | 502 | | gpt-3.5-turbo-1106 | 41.5 | (-2.9, 2.1) | 191 | | mistral-7b-instruct-v0.3 | 41.1 | (-4.3, 3.5) | 469 | | gigachat_pro | 40.9 | (-3.4, 3.6) | 294 | | openchat-3.6-8b-20240522 | 39.1 | (-3.2, 4.1) | 428 | | vikhr-it-5.3-fp16-32k | 38.8 | (-3.5, 3.3) | 519 | | hermes-2-pro-llama-3-8b | 38.4 | (-3.2, 3.1) | 463 | | kolibri-vikhr-mistral-0427 | 34.5 | (-2.9, 3.5) | 489 | | vikhr-it-5.3-fp16 | 33.5 | (-3.5, 3.8) | 523 | | llama-3-instruct-8b-simpo | 32.7 | (-3.9, 3.6) | 417 | | meta-llama-3-8b-instruct | 32.1 | (-3.4, 3.3) | 450 | | neural-chat-7b-v3-3 | 25.9 | (-2.7, 3.6) | 927 | | gigachat_lite | 25.4 | (-2.8, 2.5) | 276 | | snorkel-mistral-pairrm-dpo | 10.3 | (-2.0, 2.3) | 773 | | storm-7b | 3.7 | (-1.3, 1.6) | 419 |
refiners/sdxl.controllora.cpds
refiners
2024-10-09T21:48:01Z
5
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sdxl", "art", "controllora", "base_model:lllyasviel/misc", "base_model:adapter:lllyasviel/misc", "region:us" ]
image-to-image
2024-10-08T21:09:01Z
--- library_name: refiners pipeline_tag: image-to-image base_model: lllyasviel/misc base_model_relation: adapter tags: - image-to-image - stable-diffusion - sdxl - art - controllora --- # ControlLoRA CPDS ![controlnet architecture](https://github.com/lllyasviel/ControlNet/blob/e38d22aa1ce2c2c72d2536c8f337b47249033c98/github_page/sd.png?raw=true) ## Citation ```bibtex @software{wu2023controllorav2, author = {Wu Hecong}, month = {9}, title = {{ControlLoRA Version 2: A Lightweight Neural Network To Control Stable Diffusion Spatial Information Version 2}}, url = {https://github.com/HighCWu/control-lora-2}, version = {1.0.0}, year = {2023} } ```
refiners/sd15.controlnet.normalbae
refiners
2024-10-09T21:33:05Z
11
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sd1.5", "art", "controlnet", "controlnet-v1-1", "en", "arxiv:2302.05543", "base_model:lllyasviel/control_v11p_sd15_normalbae", "base_model:adapter:lllyasviel/control_v11p_sd15_normalbae", "license:openrail", "region:us" ]
image-to-image
2024-10-08T21:05:33Z
--- license: openrail language: - en library_name: refiners pipeline_tag: image-to-image base_model: lllyasviel/control_v11p_sd15_normalbae base_model_relation: adapter tags: - image-to-image - stable-diffusion - sd1.5 - art - controlnet - controlnet-v1-1 --- # Controlnet NormalBae (control_v11p_sd15_normalbae) ![controlnet architecture](https://github.com/lllyasviel/ControlNet/blob/e38d22aa1ce2c2c72d2536c8f337b47249033c98/github_page/sd.png?raw=true) ## Citation ```bibtex @misc{zhang2023adding, title = {Adding Conditional Control to Text-to-Image Diffusion Models}, author = {Lvmin Zhang and Maneesh Agrawala}, year = 2023, url = {https://arxiv.org/abs/2302.05543}, eprint = {2302.05543}, archiveprefix = {arXiv}, primaryclass = {cs.CV} } ```
refiners/sd15.controlnet.canny
refiners
2024-10-09T21:30:43Z
19
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sd1.5", "art", "controlnet", "controlnet-v1-1", "en", "arxiv:2302.05543", "base_model:lllyasviel/control_v11p_sd15_canny", "base_model:adapter:lllyasviel/control_v11p_sd15_canny", "license:openrail", "region:us" ]
image-to-image
2024-10-08T21:04:38Z
--- license: openrail language: - en library_name: refiners pipeline_tag: image-to-image base_model: lllyasviel/control_v11p_sd15_canny base_model_relation: adapter tags: - image-to-image - stable-diffusion - sd1.5 - art - controlnet - controlnet-v1-1 --- # Controlnet Canny (control_v11p_sd15_canny) ![controlnet architecture](https://github.com/lllyasviel/ControlNet/blob/e38d22aa1ce2c2c72d2536c8f337b47249033c98/github_page/sd.png?raw=true) ## Citation ```bibtex @misc{zhang2023adding, title = {Adding Conditional Control to Text-to-Image Diffusion Models}, author = {Lvmin Zhang and Maneesh Agrawala}, year = 2023, url = {https://arxiv.org/abs/2302.05543}, eprint = {2302.05543}, archiveprefix = {arXiv}, primaryclass = {cs.CV} } ```
RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf
RichardErkhov
2024-10-09T21:29:51Z
117
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-08T20:03:16Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) magnum-v1-72b - GGUF - Model creator: https://huggingface.co/anthracite-org/ - Original model: https://huggingface.co/anthracite-org/magnum-v1-72b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [magnum-v1-72b.Q2_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.Q2_K.gguf) | Q2_K | 27.76GB | | [magnum-v1-72b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.IQ3_XS.gguf) | IQ3_XS | 30.59GB | | [magnum-v1-72b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.IQ3_S.gguf) | IQ3_S | 32.12GB | | [magnum-v1-72b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.Q3_K_S.gguf) | Q3_K_S | 32.12GB | | [magnum-v1-72b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.IQ3_M.gguf) | IQ3_M | 33.07GB | | [magnum-v1-72b.Q3_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.Q3_K.gguf) | Q3_K | 35.11GB | | [magnum-v1-72b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.Q3_K_M.gguf) | Q3_K_M | 35.11GB | | [magnum-v1-72b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.Q3_K_L.gguf) | Q3_K_L | 36.79GB | | [magnum-v1-72b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | IQ4_XS | 37.4GB | | [magnum-v1-72b.Q4_0.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q4_0 | 38.4GB | | [magnum-v1-72b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | IQ4_NL | 38.9GB | | [magnum-v1-72b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q4_K_S | 40.88GB | | [magnum-v1-72b.Q4_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q4_K | 44.16GB | | [magnum-v1-72b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q4_K_M | 44.16GB | | [magnum-v1-72b.Q4_1.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q4_1 | 42.56GB | | [magnum-v1-72b.Q5_0.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q5_0 | 46.72GB | | [magnum-v1-72b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q5_K_S | 47.85GB | | [magnum-v1-72b.Q5_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q5_K | 50.71GB | | [magnum-v1-72b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q5_K_M | 50.71GB | | [magnum-v1-72b.Q5_1.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q5_1 | 50.88GB | | [magnum-v1-72b.Q6_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/blob/main/magnum-v1-72b.Q6_K.gguf) | Q6_K | 10.2GB | | [magnum-v1-72b.Q8_0.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf/tree/main/) | Q8_0 | 71.96GB | 
Original model description: --- language: - en - zh license: other tags: - chat base_model: Qwen/Qwen2-72B-Instruct license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE pipeline_tag: text-generation model-index: - name: magnum-72b-v1 results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 76.06 name: strict accuracy - type: inst_level_strict_acc and prompt_level_strict_acc value: 76.06 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 57.65 name: normalized accuracy - type: acc_norm value: 57.65 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 35.27 name: exact match - type: exact_match value: 35.27 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 18.79 name: acc_norm - type: acc_norm value: 18.79 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 15.62 name: acc_norm - type: acc_norm value: 15.62 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.64 name: accuracy - type: acc value: 49.85 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=alpindale/magnum-72b-v1 name: Open LLM Leaderboard --- ![](https://files.catbox.moe/ngqnb1.png) This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen-2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct). ## Prompting Model has been Instruct tuned with the ChatML formatting. A typical input would look like this: ```py """<|im_start|>user Hi there!<|im_end|> <|im_start|>assistant Nice to meet you!<|im_end|> <|im_start|>user Can I ask a question?<|im_end|> <|im_start|>assistant """ ``` ## Credits This model has been a team effort, and the credits goes to all members of Anthracite. We'd also like to thank [Kearm](https://twitter.com/Nottlespike) for sponsoring the compute needed to train this model. ## Training The training was done with 55 million tokens of high-quality RP data, over 1.5 epochs. 
We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## Safety ... # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_alpindale__magnum-72b-v1) | Metric |Value| |-------------------|----:| |Avg. |42.17| |IFEval (0-Shot) |76.06| |BBH (3-Shot) |57.65| |MATH Lvl 5 (4-Shot)|35.27| |GPQA (0-shot) |18.79| |MuSR (0-shot) |15.62| |MMLU-PRO (5-shot) |49.64| # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_anthracite-org__magnum-v1-72b) | Metric |Value| |-------------------|----:| |Avg. |42.21| |IFEval (0-Shot) |76.06| |BBH (3-Shot) |57.65| |MATH Lvl 5 (4-Shot)|35.27| |GPQA (0-shot) |18.79| |MuSR (0-shot) |15.62| |MMLU-PRO (5-shot) |49.85|
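The quantized files in the table above can be run with any GGUF-compatible runtime; llama-cpp-python is one option, not specifically endorsed by this repository. A minimal sketch that downloads the smallest listed quant and prompts it with the ChatML format described above; note that even the Q2_K file is roughly 28GB, so substantial RAM or VRAM is required:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the smaller quants listed in the table above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/anthracite-org_-_magnum-v1-72b-gguf",
    filename="magnum-v1-72b.Q2_K.gguf",
)

# n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# The original model is Instruct-tuned with ChatML formatting (see the prompting section above).
prompt = (
    "<|im_start|>user\n"
    "Hi there!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```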
refiners/sd15.controlnet.tile
refiners
2024-10-09T21:21:27Z
3,497
0
refiners
[ "refiners", "safetensors", "image-to-image", "stable-diffusion", "sd1.5", "art", "controlnet", "controlnet-v1-1", "en", "arxiv:2302.05543", "base_model:lllyasviel/control_v11f1e_sd15_tile", "base_model:adapter:lllyasviel/control_v11f1e_sd15_tile", "license:openrail", "region:us" ]
image-to-image
2024-07-18T10:24:39Z
--- license: openrail language: - en library_name: refiners pipeline_tag: image-to-image base_model: lllyasviel/control_v11f1e_sd15_tile base_model_relation: adapter tags: - image-to-image - stable-diffusion - sd1.5 - art - controlnet - controlnet-v1-1 --- # Controlnet Tile (control_v11f1e_sd15_tile) ![controlnet architecture](https://github.com/lllyasviel/ControlNet/blob/e38d22aa1ce2c2c72d2536c8f337b47249033c98/github_page/sd.png?raw=true) ## Citation ```bibtex @misc{zhang2023adding, title = {Adding Conditional Control to Text-to-Image Diffusion Models}, author = {Lvmin Zhang and Maneesh Agrawala}, year = 2023, url = {https://arxiv.org/abs/2302.05543}, eprint = {2302.05543}, archiveprefix = {arXiv}, primaryclass = {cs.CV} } ```
harshasai-dev/toc
harshasai-dev
2024-10-09T21:20:48Z
14
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-08T15:14:10Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: rups --- # Rupss <!-- <Gallery /> --> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `rups` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('harshasai-dev/rupss', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
deepnet/C01SN29Model1
deepnet
2024-10-09T21:09:03Z
33
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-09T21:02:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
refiners/dinov2.large.patch_14
refiners
2024-10-09T21:02:52Z
7
0
refiners
[ "refiners", "safetensors", "dino", "dinov2", "features", "facebook", "image-feature-extraction", "arxiv:2304.07193", "base_model:facebook/dinov2-large", "base_model:adapter:facebook/dinov2-large", "license:apache-2.0", "region:us" ]
image-feature-extraction
2024-08-07T09:33:42Z
--- license: apache-2.0 pipeline_tag: image-feature-extraction base_model: facebook/dinov2-large base_model_relation: adapter library_name: refiners tags: - dino - dinov2 - features - facebook --- # DINOv2 large <video src="https://github.com/facebookresearch/dinov2/assets/60359573/f168823e-7922-415a-b429-578badf5c356" autoplay loop></video> ## Citation ```bibtex @misc{oquab2023dinov2, title = {DINOv2: Learning Robust Visual Features without Supervision}, author = {Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr}, journal = {arXiv:2304.07193}, year = {2023} } ```
TroyDoesAI/BlackSheep-1B
TroyDoesAI
2024-10-09T21:01:53Z
5
0
null
[ "safetensors", "llama", "license:artistic-2.0", "region:us" ]
null
2024-09-26T03:51:09Z
--- license: artistic-2.0 ---
refiners/sam.vit_h
refiners
2024-10-09T21:01:29Z
15
0
refiners
[ "refiners", "safetensors", "segmentation", "sam", "features", "facebook", "image-segmentation", "arxiv:2304.02643", "base_model:facebook/sam-vit-huge", "base_model:adapter:facebook/sam-vit-huge", "license:apache-2.0", "region:us" ]
image-segmentation
2024-10-08T21:16:41Z
--- license: apache-2.0 base_model: facebook/sam-vit-huge base_model_relation: adapter pipeline_tag: image-segmentation library_name: refiners tags: - segmentation - sam - features - facebook --- # Segment Anything (ViT H) <div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem;"> <video src="https://segment-anything.com/assets/section-1.1a.mp4" autoplay loop></video> <video src="https://segment-anything.com/assets/section-1.1b.mp4" autoplay loop></video> <video src="https://segment-anything.com/assets/section-1.1c.mp4" autoplay loop></video> </div> ## Citation ```bibtex @article{kirillov2023segany, title = {Segment Anything}, author = {Kirillov, Alexander and Mintun, Eric and Ravi, Nikhila and Mao, Hanzi and Rolland, Chloe and Gustafson, Laura and Xiao, Tete and Whitehead, Spencer and Berg, Alexander C. and Lo, Wan-Yen and Doll{\'a}r, Piotr and Girshick, Ross}, journal = {arXiv:2304.02643}, year = {2023} } ```
FourOhFour/Crispy_Crab_4B
FourOhFour
2024-10-09T21:01:07Z
6
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "axolotl", "generated_from_trainer", "base_model:jeiku/instructered4B", "base_model:finetune:jeiku/instructered4B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-09T19:08:04Z
--- library_name: transformers license: other base_model: jeiku/instructered4B tags: - axolotl - generated_from_trainer model-index: - name: TheBest4B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: jeiku/instructered4B model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false hub_model_id: jeiku/TheBest4B hub_strategy: "all_checkpoints" push_dataset_to_hub: hf_use_auth_token: true datasets: - path: FourOhFour/RP_Phase type: sharegpt conversation: chatml chat_template: chatml shuffle_merged_datasets: true val_set_size: 0.0025 output_dir: ./outputs/out adapter: lora_r: lora_alpha: lora_dropout: lora_target_linear: sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: true liger_fused_linear_cross_entropy: true wandb_project: EXP4B wandb_entity: wandb_watch: wandb_name: EXP4B wandb_log_model: gradient_accumulation_steps: 12 micro_batch_size: 3 num_epochs: 2 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.00001 weight_decay: 0.05 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_ratio: 0.1 evals_per_epoch: 4 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 2 debug: deepspeed: deepspeed_configs/zero3_bf16.json fsdp: fsdp_config: special_tokens: pad_token: <|finetune_right_pad_id|> ``` </details><br> # TheBest4B This model is a fine-tuned version of [jeiku/instructered4B](https://huggingface.co/jeiku/instructered4B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1148 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 12 - total_train_batch_size: 72 - total_eval_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 22 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.8805 | 0.0089 | 1 | 2.7425 | | 1.7985 | 0.2491 | 28 | 2.2908 | | 1.727 | 0.4981 | 56 | 2.1943 | | 1.7429 | 0.7472 | 84 | 2.1665 | | 1.6867 | 0.9963 | 112 | 2.1309 | | 1.6463 | 1.2461 | 140 | 2.1267 | | 1.593 | 1.4959 | 168 | 2.1148 | | 1.604 | 1.7457 | 196 | 2.1129 | | 1.6085 | 1.9955 | 224 | 2.1148 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.20.0
thelordsauron/masons
thelordsauron
2024-10-09T20:49:15Z
8
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-06T17:21:39Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: masons license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # masons A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `masons` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
MubarakB/mt5_small_lg_inf_en
MubarakB
2024-10-09T20:22:52Z
110
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:MubarakB/mt5_small_lg_en", "base_model:finetune:MubarakB/mt5_small_lg_en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-09T20:08:45Z
--- library_name: transformers license: apache-2.0 base_model: MubarakB/mt5_small_lg_en tags: - generated_from_trainer metrics: - bleu model-index: - name: mt5_small_lg_inf_en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5_small_lg_inf_en This model is a fine-tuned version of [MubarakB/mt5_small_lg_en](https://huggingface.co/MubarakB/mt5_small_lg_en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4301 - Bleu: 0.3034 - Gen Len: 8.1551 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 138 | 0.4671 | 0.0646 | 9.4449 | | No log | 2.0 | 276 | 0.4562 | 0.1318 | 7.8898 | | No log | 3.0 | 414 | 0.4511 | 0.2119 | 7.9878 | | 0.4729 | 4.0 | 552 | 0.4476 | 0.2133 | 8.1184 | | 0.4729 | 5.0 | 690 | 0.4451 | 0.2128 | 8.0816 | | 0.4729 | 6.0 | 828 | 0.4433 | 0.3272 | 7.9224 | | 0.4729 | 7.0 | 966 | 0.4415 | 0.3383 | 7.6571 | | 0.4479 | 8.0 | 1104 | 0.4401 | 0.3281 | 7.5347 | | 0.4479 | 9.0 | 1242 | 0.4390 | 0.3296 | 7.4286 | | 0.4479 | 10.0 | 1380 | 0.4378 | 0.3157 | 7.6 | | 0.4418 | 11.0 | 1518 | 0.4367 | 0.3288 | 7.4327 | | 0.4418 | 12.0 | 1656 | 0.4360 | 0.316 | 7.4857 | | 0.4418 | 13.0 | 1794 | 0.4350 | 0.3167 | 7.4898 | | 0.4418 | 14.0 | 1932 | 0.4342 | 0.3161 | 7.698 | | 0.4347 | 15.0 | 2070 | 0.4337 | 0.316 | 7.849 | | 0.4347 | 16.0 | 2208 | 0.4333 | 0.3177 | 7.6735 | | 0.4347 | 17.0 | 2346 | 0.4326 | 0.3174 | 7.8082 | | 0.4347 | 18.0 | 2484 | 0.4324 | 0.3167 | 7.8531 | | 0.4315 | 19.0 | 2622 | 0.4319 | 0.3185 | 8.0163 | | 0.4315 | 20.0 | 2760 | 0.4316 | 0.318 | 8.0449 | | 0.4315 | 21.0 | 2898 | 0.4313 | 0.3171 | 8.0571 | | 0.4289 | 22.0 | 3036 | 0.4311 | 0.3195 | 7.9837 | | 0.4289 | 23.0 | 3174 | 0.4308 | 0.3188 | 8.049 | | 0.4289 | 24.0 | 3312 | 0.4307 | 0.3048 | 8.0694 | | 0.4289 | 25.0 | 3450 | 0.4304 | 0.3046 | 8.1306 | | 0.4264 | 26.0 | 3588 | 0.4303 | 0.3041 | 8.1224 | | 0.4264 | 27.0 | 3726 | 0.4302 | 0.3044 | 8.1592 | | 0.4264 | 28.0 | 3864 | 0.4301 | 0.3046 | 8.1306 | | 0.4256 | 29.0 | 4002 | 0.4301 | 0.3039 | 8.1429 | | 0.4256 | 30.0 | 4140 | 0.4301 | 0.3034 | 8.1551 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
Om1024/racist-bert
Om1024
2024-10-09T20:21:05Z
8
0
null
[ "safetensors", "bert", "region:us" ]
null
2024-10-09T19:20:55Z
# Model Card for Racist/Sexist Detection BERT

### Model Description

This model is a fine-tuned BERT model (`bert-base-uncased`) designed for text classification, specifically to detect whether a given text is **racist**, **sexist**, or **neutral**. The model has been trained on labeled data to identify harmful language and categorize it accordingly.

- **Developed by:** Om1024

## Uses

### Direct Use

This model can be used to classify text into the three categories described above: **racist**, **sexist**, or **neutral**, based on the content provided.

### Out-of-Scope Use

This model is not suitable for tasks other than text classification in the specific domain of racist or sexist language detection.

## How to Get Started with the Model

Use the following code to load and use the model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Om1024/racist-bert")
model = AutoModelForSequenceClassification.from_pretrained("Om1024/racist-bert")
```

## Training Details

- **Base Model:** `bert-base-uncased`
- **Fine-tuning Data:** Labeled dataset with categories for **racist** and **sexist** text.

---
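Once loaded, inference can go through the standard text-classification pipeline. The sketch below is illustrative only: the exact label names returned depend on the `id2label` mapping stored in the model's config, which this card does not document.

```python
from transformers import pipeline

# Illustrative sketch: label names depend on the model config's id2label mapping.
classifier = pipeline("text-classification", model="Om1024/racist-bert")

print(classifier("I had a great day at the park."))
# -> e.g. [{'label': '...', 'score': 0.97}]
```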
iliketoasters/juice-orb
iliketoasters
2024-10-09T20:18:16Z
15
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-09T16:38:12Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: 0rb license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md thumbnail: >- images/1145-a juice orb filled with orange beer sitt-fluxcomfy-orgflux1-dev-fp8-1948633481.png widget: - text: >- 0rb on a table output: url: >- images/1514-0rb on a table-fluxcomfy-orgflux1-dev-fp8-1113561121-converted.png - text: 0rb half full of beer sitting next to a can of beer, beer can is "Pineapple Delight IPA", on table at the beach output: url: >- images/1513-0rb half full of beer sitting next to a-fluxcomfy-orgflux1-dev-fp8-1334516348-converted.png - text: >- a juice orb filled with orange beer sitting on a table output: url: >- images/1145-a juice orb filled with orange beer sitt-fluxcomfy-orgflux1-dev-fp8-1948633481.png --- # juice orb A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words Trained on '0rb' but 'juice orb' might also help ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
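For programmatic use outside the UIs listed above, a hedged 🧨 diffusers sketch following the usual FLUX LoRA loading pattern could look like the following; the LoRA weight file name is an assumption, so check the repository for the actual `.safetensors` file.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# "juice-orb.safetensors" is a hypothetical file name; substitute the actual LoRA file in the repo.
pipeline.load_lora_weights("iliketoasters/juice-orb", weight_name="juice-orb.safetensors")
image = pipeline("0rb on a table").images[0]  # prompt uses the trigger word from the card
image.save("orb.png")
```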
utischoolnlp/paligemma_multimodal_query_rewrite
utischoolnlp
2024-10-09T20:18:07Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "paligemma", "image-text-to-text", "generated_from_trainer", "base_model:google/paligemma-3b-pt-224", "base_model:finetune:google/paligemma-3b-pt-224", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-07T16:07:02Z
--- library_name: transformers license: gemma base_model: google/paligemma-3b-pt-224 tags: - generated_from_trainer model-index: - name: paligemma_multimodal_query_rewrite results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma_multimodal_query_rewrite This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.46.0.dev0 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf
RichardErkhov
2024-10-09T20:13:48Z
58
0
null
[ "gguf", "arxiv:2403.19522", "endpoints_compatible", "region:us" ]
null
2024-10-09T17:26:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) SOVL-Mega-Mash-V2-L3-8B - GGUF - Model creator: https://huggingface.co/saishf/ - Original model: https://huggingface.co/saishf/SOVL-Mega-Mash-V2-L3-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [SOVL-Mega-Mash-V2-L3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [SOVL-Mega-Mash-V2-L3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [SOVL-Mega-Mash-V2-L3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [SOVL-Mega-Mash-V2-L3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [SOVL-Mega-Mash-V2-L3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [SOVL-Mega-Mash-V2-L3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [SOVL-Mega-Mash-V2-L3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [SOVL-Mega-Mash-V2-L3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [SOVL-Mega-Mash-V2-L3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [SOVL-Mega-Mash-V2-L3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [SOVL-Mega-Mash-V2-L3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [SOVL-Mega-Mash-V2-L3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [SOVL-Mega-Mash-V2-L3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [SOVL-Mega-Mash-V2-L3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [SOVL-Mega-Mash-V2-L3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [SOVL-Mega-Mash-V2-L3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [SOVL-Mega-Mash-V2-L3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | 
[SOVL-Mega-Mash-V2-L3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q5_K.gguf) | Q5_K | 4.85GB | | [SOVL-Mega-Mash-V2-L3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [SOVL-Mega-Mash-V2-L3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [SOVL-Mega-Mash-V2-L3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [SOVL-Mega-Mash-V2-L3-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf/blob/main/SOVL-Mega-Mash-V2-L3-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- license: cc-by-nc-4.0 base_model: - saishf/SOVLish-Maid-L3-8B - saishf/Neural-SOVLish-Devil-8B-L3 - saishf/Merge-Mayhem-L3-V2 - saishf/Merge-Mayhem-L3-V2.1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3) as a base. ### Models Merged The following models were included in the merge: * [saishf/SOVLish-Maid-L3-8B](https://huggingface.co/saishf/SOVLish-Maid-L3-8B) * [saishf/Merge-Mayhem-L3-V2](https://huggingface.co/saishf/Merge-Mayhem-L3-V2) * [saishf/Merge-Mayhem-L3-V2.1](https://huggingface.co/saishf/Merge-Mayhem-L3-V2.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: saishf/Neural-SOVLish-Devil-8B-L3 - model: saishf/Merge-Mayhem-L3-V2 - model: saishf/Merge-Mayhem-L3-V2.1 - model: saishf/SOVLish-Maid-L3-8B merge_method: model_stock base_model: saishf/Neural-SOVLish-Devil-8B-L3 dtype: bfloat16 ```
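To run one of the GGUF files listed above locally, a minimal sketch with `huggingface_hub` and `llama-cpp-python` (not mentioned in the card, but a common way to consume GGUF weights) might look like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the table above (Q4_K_M as an example) and load it.
model_path = hf_hub_download(
    repo_id="RichardErkhov/saishf_-_SOVL-Mega-Mash-V2-L3-8B-gguf",
    filename="SOVL-Mega-Mash-V2-L3-8B.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```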
DanJoshua/student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000
DanJoshua
2024-10-09T20:11:13Z
33
0
transformers
[ "transformers", "tensorboard", "safetensors", "s3d", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-10-09T16:06:46Z
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy - f1 - precision model-index: - name: student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # student_s3d_default_dist_kl_temp_2.0_alpha_0.2_teacher_mvit_v2_s_RWF2000 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2911 - Accuracy: 0.8925 - F1: 0.8924 - Precision: 0.8935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 50 - eval_batch_size: 50 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 84 - training_steps: 840 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | |:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:| | 0.5568 | 2.0238 | 84 | 0.4781 | 0.8187 | 0.8171 | 0.8304 | | 0.3774 | 5.0095 | 168 | 0.3891 | 0.8125 | 0.8089 | 0.8381 | | 0.2804 | 7.0333 | 252 | 0.3907 | 0.8219 | 0.8189 | 0.8445 | | 0.2303 | 10.0190 | 336 | 0.3978 | 0.8375 | 0.8349 | 0.86 | | 0.194 | 13.0048 | 420 | 0.3497 | 0.8625 | 0.8609 | 0.8796 | | 0.1504 | 15.0286 | 504 | 0.3288 | 0.8656 | 0.8645 | 0.8780 | | 0.1277 | 18.0143 | 588 | 0.2988 | 0.8781 | 0.8778 | 0.8824 | | 0.1283 | 20.0381 | 672 | 0.2509 | 0.8906 | 0.8906 | 0.8910 | | 0.0971 | 23.0238 | 756 | 0.2336 | 0.9 | 0.9000 | 0.9003 | | 0.1047 | 26.0095 | 840 | 0.2297 | 0.9031 | 0.9031 | 0.9031 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.0.1+cu118 - Datasets 3.0.1 - Tokenizers 0.20.0
apa224/dreambooth_cropped_300
apa224
2024-10-09T20:07:05Z
29
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-09T20:01:40Z
--- base_model: CompVis/stable-diffusion-v1-4 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: realistic apple tree --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - apa224/dreambooth_cropped_300 This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on realistic apple tree using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
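For the TODO'd usage snippet in the card above, a minimal sketch assuming the repository hosts standard `StableDiffusionPipeline` weights (as its tags indicate) could be:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "apa224/dreambooth_cropped_300", torch_dtype=torch.float16
).to("cuda")
# "realistic apple tree" is the instance prompt recorded in the card metadata.
image = pipe("realistic apple tree", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("apple_tree.png")
```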
apa224/dreambooth_cropped_150
apa224
2024-10-09T20:03:28Z
29
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-10-09T19:59:16Z
--- base_model: CompVis/stable-diffusion-v1-4 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: realistic apple tree --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - apa224/dreambooth_cropped_150 This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on realistic apple tree using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
nm-testing/OLMoE-1B-7B-0924-Instruct-FP8
nm-testing
2024-10-09T20:00:28Z
7
0
null
[ "safetensors", "olmoe", "compressed-tensors", "region:us" ]
null
2024-09-20T23:23:15Z
``` lm_eval --model vllm --model_args pretrained=/home/mgoin/code/llm-compressor/examples/quantizing_moe/OLMoE-1B-7B-0924-Instruct-FP8,tensor_parallel_size=1,trust_remote_code=True --tasks gsm8k --num_fewshot 5 --batch_size auto vllm (pretrained=/home/mgoin/code/llm-compressor/examples/quantizing_moe/OLMoE-1B-7B-0924-Instruct-FP8,tensor_parallel_size=1,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto |Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr| |-----|------:|----------------|-----:|-----------|---|-----:|---|-----:| |gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.3510|± |0.0131| | | |strict-match | 5|exact_match|↑ |0.3389|± |0.0130| ``` ## Creation ```python import torch from datasets import load_dataset from transformers import AutoTokenizer from llmcompressor.modifiers.quantization import QuantizationModifier from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot # select a Mixture of Experts model for quantization MODEL_ID = "allenai/OLMoE-1B-7B-0924-Instruct" model = SparseAutoModelForCausalLM.from_pretrained( MODEL_ID, device_map="auto", torch_dtype="auto", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) # Select calibration dataset. # its recommended to use more calibration samples for MoE models so each expert is hit DATASET_ID = "HuggingFaceH4/ultrachat_200k" DATASET_SPLIT = "train_sft" NUM_CALIBRATION_SAMPLES = 2048 MAX_SEQUENCE_LENGTH = 2048 # Load dataset and preprocess. ds = load_dataset(DATASET_ID, split=DATASET_SPLIT) ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES)) def preprocess(example): return { "text": tokenizer.apply_chat_template( example["messages"], tokenize=False, ) } ds = ds.map(preprocess) # Tokenize inputs. def tokenize(sample): return tokenizer( sample["text"], padding=False, max_length=MAX_SEQUENCE_LENGTH, truncation=True, add_special_tokens=False, ) ds = ds.map(tokenize, remove_columns=ds.column_names) # define a llmcompressor recipe for FP8 W8A8 quantization # since the MoE gate layers are sensitive to quantization, we add them to the ignore # list so they remain at full precision recipe = [ QuantizationModifier( targets="Linear", scheme="FP8", ignore=["lm_head", "re:.*mlp.gate$"], ), ] SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8" oneshot( model=model, dataset=ds, recipe=recipe, max_seq_length=MAX_SEQUENCE_LENGTH, num_calibration_samples=NUM_CALIBRATION_SAMPLES, save_compressed=True, output_dir=SAVE_DIR, ) print("========== SAMPLE GENERATION ==============") SAMPLE_INPUT = ["I love quantization because"] tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) inputs = tokenizer(SAMPLE_INPUT, return_tensors="pt", padding=True).to(model.device) output = model.generate(**inputs, max_length=50) text_output = tokenizer.batch_decode(output) print(text_output) ```
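For inference, the same vLLM backend used in the evaluation command above can serve the quantized checkpoint; a minimal offline-generation sketch (assuming a vLLM build with compressed-tensors/FP8 support and FP8-capable hardware) is:

```python
from vllm import LLM, SamplingParams

# Load the FP8-quantized MoE checkpoint with vLLM.
llm = LLM(model="nm-testing/OLMoE-1B-7B-0924-Instruct-FP8", trust_remote_code=True)
sampling = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["I love quantization because"], sampling)
print(outputs[0].outputs[0].text)
```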
nm-testing/TinyLlama-1.1B-Chat-v1.0-actorder-group
nm-testing
2024-10-09T20:00:20Z
3,611
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us" ]
text-generation
2024-09-05T15:40:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nm-testing/SmolLM-1.7B-Instruct-quantized.w4a16
nm-testing
2024-10-09T20:00:05Z
5
0
null
[ "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2210.17323", "license:apache-2.0", "compressed-tensors", "region:us" ]
text-generation
2024-08-23T15:49:54Z
--- language: - en pipeline_tag: text-generation license: apache-2.0 --- # SmolLM-135M-Instruct-quantized.w4a16 ## Model Overview - **Model Architecture:** SmolLM-135M-Instruct - **Input:** Text - **Output:** Text - **Model Optimizations:** - **Weight quantization:** INT4 - **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M), this models is intended for assistant-like chat. - **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. - **Release Date:** 8/23/2024 - **Version:** 1.0 - **License(s)**: [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Model Developers:** Neural Magic Quantized version of [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M). It achieves an average score of 31.91 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 31.55. ### Model Optimizations This model was obtained by quantizing the weights of [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M) to INT4 data type. This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%. Only the weights of the linear operators within transformers blocks are quantized. Symmetric group-wise quantization is applied, in which a linear scaling per group maps the INT4 and floating point representations of the quantized weights. The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library. Quantization is performed with 10% damping factor, group-size as 64 and 512 sequences sampled from [LLM Compression Calibration](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration). ## Creation This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library as presented in the code snipet below. ```python from transformers import AutoTokenizer from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot from llmcompressor.modifiers.quantization import GPTQModifier from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy from datasets import load_dataset import random model_id = "HuggingFaceTB/SmolLM-135M-Instruct" num_samples = 512 max_seq_len = 4096 tokenizer = AutoTokenizer.from_pretrained(model_id) preprocess_fn = lambda example: {"text": "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n{text}".format_map(example)} dataset_name = "neuralmagic/LLM_compression_calibration" dataset = load_dataset(dataset_name, split="train") ds = dataset.shuffle().select(range(num_samples)) ds = ds.map(preprocess_fn) examples = [ tokenizer( example["text"], padding=False, max_length=max_seq_len, truncation=True, ) for example in ds ] # recipe = "w4a16_nohead_recipe.yaml" recipe = GPTQModifier( targets="Linear", scheme="W4A16", ignore=["lm_head"], dampening_frac=0.1, ) model = SparseAutoModelForCausalLM.from_pretrained( model_id, device_map="auto", trust_remote_code=True ) print(model) oneshot( model=model, dataset=ds, recipe=recipe, max_seq_length=max_seq_len, num_calibration_samples=num_samples, oneshot_device="cuda:1,2,3", ) model_name = model_id.split("/")[-1] model.save_pretrained(f"{model_name}-quantized.w4a16") tokenizer.save_pretrained(f"{model_name}-quantized.w4a16") ``` ## Evaluation The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [sparseML](https://github.com/neuralmagic/sparseml) engine, using the following command: ``` lm_eval \ --model sparseml \ --model_args pretrained=nm-testing/SmolLM-1.7B-Instruct-quantized.w4a16,dtype=bfloat16,max_legth=2048,add_bos_token=True,parallelize=True \ --tasks openllm \ --batch_size auto ``` ### Accuracy #### Open LLM Leaderboard evaluation scores <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>SmolLM-135M-Instruct </strong> </td> <td><strong>SmolLM-135M-Instruct-quantized.w4a16(this model)</strong> </td> <td><strong>Recovery</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>26.220 </td> <td>25.202 </td> <td>96.12% </td> </tr> <tr> <td>ARC Challenge (25-shot) </td> <td>29.948 </td> <td>30.034 </td> <td>100.29% </td> </tr> <tr> <td>GSM-8K (5-shot, strict-match) </td> <td>1.289 </td> <td>1.971 </td> <td>152.91% </td> </tr> <tr> <td>Hellaswag (10-shot) </td> <td>41.41 </td> <td>40.81 </td> <td>98.55% </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>50.039 </td> <td>53.591 </td> <td>107.10% </td> </tr> <tr> <td>TruthfulQA (0-shot) </td> <td>40.38 </td> <td>39.87 </td> <td>98.74% </td> </tr> <tr> <td><strong>Average</strong> </td> <td><strong>31.55</strong> </td> <td><strong>31.91</strong> </td> <td><strong>101.16%</strong> </td> </tr> </table>
nm-testing/SmolLM-135M-Instruct-quantized.w4a16
nm-testing
2024-10-09T20:00:02Z
7
0
null
[ "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2210.17323", "license:apache-2.0", "compressed-tensors", "region:us" ]
text-generation
2024-08-23T15:38:56Z
--- language: - en pipeline_tag: text-generation license: apache-2.0 --- # SmolLM-135M-Instruct-quantized.w4a16 ## Model Overview - **Model Architecture:** SmolLM-135M-Instruct - **Input:** Text - **Output:** Text - **Model Optimizations:** - **Weight quantization:** INT4 - **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M), this models is intended for assistant-like chat. - **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. - **Release Date:** 8/23/2024 - **Version:** 1.0 - **License(s)**: [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Model Developers:** Neural Magic Quantized version of [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M). It achieves an average score of 31.91 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 31.55. ### Model Optimizations This model was obtained by quantizing the weights of [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M) to INT4 data type. This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%. Only the weights of the linear operators within transformers blocks are quantized. Symmetric group-wise quantization is applied, in which a linear scaling per group maps the INT4 and floating point representations of the quantized weights. The [GPTQ](https://arxiv.org/abs/2210.17323) algorithm is applied for quantization, as implemented in the [llm-compressor](https://github.com/vllm-project/llm-compressor) library. Quantization is performed with 10% damping factor, group-size as 64 and 512 sequences sampled from [LLM Compression Calibration](https://huggingface.co/datasets/neuralmagic/LLM_compression_calibration). ## Creation This model was created by using the [llm-compressor](https://github.com/vllm-project/llm-compressor) library as presented in the code snipet below. ```python from transformers import AutoTokenizer from llmcompressor.transformers import SparseAutoModelForCausalLM, oneshot from llmcompressor.modifiers.quantization import GPTQModifier from compressed_tensors.quantization import QuantizationArgs, QuantizationType, QuantizationStrategy from datasets import load_dataset import random model_id = "HuggingFaceTB/SmolLM-135M-Instruct" num_samples = 512 max_seq_len = 4096 tokenizer = AutoTokenizer.from_pretrained(model_id) preprocess_fn = lambda example: {"text": "Below is an instruction that describes a task. 
Write a response that appropriately completes the request.\n\n{text}".format_map(example)} dataset_name = "neuralmagic/LLM_compression_calibration" dataset = load_dataset(dataset_name, split="train") ds = dataset.shuffle().select(range(num_samples)) ds = ds.map(preprocess_fn) examples = [ tokenizer( example["text"], padding=False, max_length=max_seq_len, truncation=True, ) for example in ds ] # recipe = "w4a16_nohead_recipe.yaml" recipe = GPTQModifier( targets="Linear", scheme="W4A16", ignore=["lm_head"], dampening_frac=0.1, ) model = SparseAutoModelForCausalLM.from_pretrained( model_id, device_map="auto", trust_remote_code=True ) print(model) oneshot( model=model, dataset=ds, recipe=recipe, max_seq_length=max_seq_len, num_calibration_samples=num_samples, oneshot_device="cuda:1,2,3", ) model_name = model_id.split("/")[-1] model.save_pretrained(f"{model_name}-quantized.w4a16") tokenizer.save_pretrained(f"{model_name}-quantized.w4a16") ``` ## Evaluation The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [sparseML](https://github.com/neuralmagic/sparseml) engine, using the following command: ``` lm_eval \ --model sparseml \ --model_args pretrained=nm-testing/SmolLM-1.7B-Instruct-quantized.w4a16,dtype=bfloat16,max_legth=2048,add_bos_token=True,parallelize=True \ --tasks openllm \ --batch_size auto ``` ### Accuracy #### Open LLM Leaderboard evaluation scores <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>SmolLM-135M-Instruct </strong> </td> <td><strong>SmolLM-135M-Instruct-quantized.w4a16(this model)</strong> </td> <td><strong>Recovery</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>26.220 </td> <td>25.202 </td> <td>96.12% </td> </tr> <tr> <td>ARC Challenge (25-shot) </td> <td>29.948 </td> <td>30.034 </td> <td>100.29% </td> </tr> <tr> <td>GSM-8K (5-shot, strict-match) </td> <td>1.289 </td> <td>1.971 </td> <td>152.91% </td> </tr> <tr> <td>Hellaswag (10-shot) </td> <td>41.41 </td> <td>40.81 </td> <td>98.55% </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>50.039 </td> <td>53.591 </td> <td>107.10% </td> </tr> <tr> <td>TruthfulQA (0-shot) </td> <td>40.38 </td> <td>39.87 </td> <td>98.74% </td> </tr> <tr> <td><strong>Average</strong> </td> <td><strong>31.55</strong> </td> <td><strong>31.91</strong> </td> <td><strong>101.16%</strong> </td> </tr> </table>