modelId string | author string | last_modified timestamp[us, tz=UTC] | downloads int64 | likes int64 | library_name string | tags sequence | pipeline_tag string | createdAt timestamp[us, tz=UTC] | card string |
---|---|---|---|---|---|---|---|---|---|
CED6688/magnum-v4-72b-AWQ | CED6688 | 2024-10-25T16:08:22Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen2.5-72B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-72B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-10-25T15:40:47Z | ---
license: apache-2.0
language:
- en
- zh
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
base_model:
- Qwen/Qwen2.5-72B-Instruct
- anthracite-org/magnum-v4-72b
---
## AWQ Quantization Note
My favorite model is Qwen2.5-72B-Instruct, but its responses are a little dry sometimes, so I tried this model to see if it provided better responses. Unfortunately, it doesn't perform as well for my primary RAG/tools use cases, which require stricter adherence to previous context.
Qwen2.5-72B and derived models require an extra padding step to quantize to AWQ in a way that supports tensor parallelism with vLLM and other services. In case others find this model suitable for their needs, I'm uploading my AWQ 4-bit quant, which first follows the padding step at the bottom of [this page](https://qwen.readthedocs.io/en/latest/quantization/gptq.html).
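For reference, here is a rough sketch of that padding step, adapted from the linked Qwen docs. The pad amount, paths, and the in-memory approach are illustrative (the docs process the checkpoint shard by shard, which you will want for a 72B model); the goal is an `intermediate_size` that divides evenly across your AWQ group size and tensor-parallel degree:
```python
import torch
import torch.nn.functional as F
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

src = "Qwen/Qwen2.5-72B-Instruct"   # base model to pad
dst = "./qwen2.5-72b-padded"        # padded copy, ready for AWQ quantization
pad = 128                           # placeholder: choose so the padded
                                    # intermediate_size splits cleanly under TP

model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)
state = model.state_dict()
for name, tensor in state.items():
    # gate/up projections grow along the output dim, down_proj along the input dim
    if name.endswith(("mlp.gate_proj.weight", "mlp.up_proj.weight")):
        state[name] = F.pad(tensor, (0, 0, 0, pad))
    elif name.endswith("mlp.down_proj.weight"):
        state[name] = F.pad(tensor, (0, pad))

config = AutoConfig.from_pretrained(src)
config.intermediate_size += pad
padded = AutoModelForCausalLM.from_config(config)
padded.load_state_dict(state)
padded.save_pretrained(dst)
AutoTokenizer.from_pretrained(src).save_pretrained(dst)
```
The padded checkpoint can then be quantized with AutoAWQ as usual.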
## MAIN MODEL CARD:

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This version was experimental because it was trained on top of the Instruct model, but it turned out amazing; hence it was code-named magnum-alter, the original model that kickstarted the v4 family.
This model is fine-tuned on top of [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct).
## Prompting
A typical input would look like this:
```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
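Since this is standard ChatML, the same prompt can also be built programmatically with the tokenizer's chat template — a minimal sketch, assuming the repo's tokenizer ships the ChatML template shown above:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CED6688/magnum-v4-72b-AWQ")
messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
# add_generation_prompt appends the trailing <|im_start|>assistant turn
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should match the transcript above
```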
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```json
{
"story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
"example_separator": "",
"chat_start": "",
"use_stop_strings": false,
"allow_jailbreak": false,
"always_force_name2": true,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Magnum ChatML"
}
```
</details><br>
<details><summary>instruct template</summary>
```json
{
"system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\nβ’ Maintain the character persona but allow it to evolve with the story.\nβ’ Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\nβ’ All types of outputs are encouraged; respond accordingly to the narrative.\nβ’ Include dialogues, actions, and thoughts in each response.\nβ’ Utilize all five senses to describe scenarios within {{char}}'s dialogue.\nβ’ Use emotional symbols such as "!" and "~" in appropriate contexts.\nβ’ Incorporate onomatopoeia when suitable.\nβ’ Allow time for {{user}} to respond with their own input, respecting their agency.\nβ’ Act as secondary characters and NPCs as needed, and remove them when appropriate.\nβ’ When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\nβ’ Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\nβ’ Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\nβ’ Repetitive and monotonous outputs.\nβ’ Positivity bias in your replies.\nβ’ Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"input_sequence": "<|im_start|>user\n",
"output_sequence": "<|im_start|>assistant\n",
"last_output_sequence": "",
"system_sequence": "<|im_start|>system\n",
"stop_sequence": "<|im_end|>",
"wrap": false,
"macro": true,
"names": true,
"names_force_groups": true,
"activation_regex": "",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"skip_examples": false,
"output_suffix": "<|im_end|>\n",
"input_suffix": "<|im_end|>\n",
"system_suffix": "<|im_end|>\n",
"user_alignment_message": "",
"system_same_as_user": false,
"last_system_sequence": "",
"name": "Magnum ChatML"
}
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: /workspace/data/models/Qwen2.5-72B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.2
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal
type: sharegpt
conversation: chatml
- path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
type: sharegpt
conversation: chatml
- path: anthracite-org/nopm_claude_writing_fixed
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo_opus_misc_240827
type: sharegpt
conversation: chatml
- path: anthracite-org/kalo_misc_part2
type: sharegpt
conversation: chatml
#chat_template: chatml
shuffle_merged_datasets: true
#default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: /workspace/data/magnum-72b-data
val_set_size: 0.0
output_dir: /workspace/data/72b-fft-out
sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 72b-magnum-fft
wandb_entity:
wandb_watch:
wandb_name: alter-attempt-01
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000004
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
## Credits
We'd like to thank [DoctorShotgun](https://huggingface.co/Doctor-Shotgun) for sponsoring the compute for this train.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_32k_llama3_qwen2_v1.2](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_llama3_qwen2_v1.2)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
- [lodrick-the-lafted/kalo-opus-instruct-3k-filtered](https://huggingface.co/datasets/lodrick-the-lafted/kalo-opus-instruct-3k-filtered)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827)
- [anthracite-org/kalo_misc_part2](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2)
## Training
We used 8x mi300x GPUs graciously provided by [DoctorShotgun](https://huggingface.co/Doctor-Shotgun) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
... |
sheilig/bert-finetuned-ner | sheilig | 2024-10-25T16:04:32Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-25T16:04:13Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9345
- Recall: 0.9515
- F1: 0.9430
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
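For readers reproducing a comparable run, these settings map directly onto `transformers.TrainingArguments` — a minimal sketch (the output directory is a placeholder; the card does not name the training dataset):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```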
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0749 | 1.0 | 1756 | 0.0678 | 0.8947 | 0.9327 | 0.9133 | 0.9808 |
| 0.0343 | 2.0 | 3512 | 0.0675 | 0.9330 | 0.9461 | 0.9395 | 0.9853 |
| 0.021 | 3.0 | 5268 | 0.0612 | 0.9345 | 0.9515 | 0.9430 | 0.9863 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
muratsimsek003/turkish-loodos-bert-base-uncased-boun-qa | muratsimsek003 | 2024-10-25T16:02:13Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-10-25T16:02:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mbayser/granite-20b-functioncalling-FP8-KV | mbayser | 2024-10-25T16:00:40Z | 7,640 | 0 | null | [
"safetensors",
"gpt_bigcode",
"arxiv:2407.00121",
"license:apache-2.0",
"compressed-tensors",
"region:us"
] | null | 2024-10-25T14:38:06Z | ---
license: apache-2.0
---
### Granite-20B-FunctionCalling
#### Model Summary
Granite-20B-FunctionCalling is a finetuned model based on IBM's [granite-20b-code-instruct](https://huggingface.co/ibm-granite/granite-20b-code-instruct) model, created to introduce function calling abilities into the Granite model family. The model is trained using a multi-task training approach on seven fundamental tasks encompassed in function calling: Nested Function Calling, Function Chaining, Parallel Functions, Function Name Detection, Parameter-Value Pair Detection, Next-Best Function, and Response Generation.
- **Developers**: IBM Research
- **Paper**: [Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks](https://arxiv.org/pdf/2407.00121v1)
- **Release Date**: July 9th, 2024
- **License**: [Apache 2.0.](https://www.apache.org/licenses/LICENSE-2.0)
### Usage
### Intended use
The model is designed to respond to function calling related instructions.
### Generation
This is a simple example of how to use the Granite-20B-FunctionCalling model.
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-20b-functioncalling"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# define the user query and list of available functions
query = "What's the current weather in New York?"
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA"
                }
            },
            "required": ["location"]
        }
    },
    {
        "name": "get_stock_price",
        "description": "Retrieves the current stock price for a given ticker symbol. The ticker symbol must be a valid symbol for a publicly traded company on a major US stock exchange like NYSE or NASDAQ. The tool will return the latest trade price in USD. It should be used when the user asks about the current or most recent price of a specific stock. It will not provide any other information about the stock or company.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
                }
            },
            "required": ["ticker"]
        }
    }
]

# serialize functions and define a payload to generate the input template
payload = {
    "functions_str": [json.dumps(x) for x in functions],
    "query": query,
}
instruction = tokenizer.apply_chat_template(payload, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(instruction, return_tensors="pt").to(device)
# generate output tokens
outputs = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
outputs = tokenizer.batch_decode(outputs)
# loop over the batch to print, in this example the batch size is 1
for output in outputs:
    # Each function call in the output will be preceded by the token "<function_call>" followed by a
    # json serialized function call of the format {"name": $function_name$, "arguments" {$arg_name$: $arg_val$}}
    # In this specific case, the output will be: <function_call> {"name": "get_current_weather", "arguments": {"location": "New York"}}
    print(output)
```
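To act on a generated call programmatically, the JSON that follows the `<function_call>` token (described in the comments above) can be parsed back into Python objects. A minimal sketch, assuming the output format shown above; `raw_decode` ignores any trailing special tokens after the JSON object:
```python
import json

def parse_function_call(text: str):
    # Returns (function_name, arguments), or None if no call was emitted.
    marker = "<function_call>"
    if marker not in text:
        return None  # the model produced a plain response instead
    tail = text.split(marker, 1)[1].lstrip()
    call, _ = json.JSONDecoder().raw_decode(tail)
    return call["name"], call["arguments"]

print(parse_function_call(
    '<function_call> {"name": "get_current_weather", '
    '"arguments": {"location": "New York"}}'
))  # -> ('get_current_weather', {'location': 'New York'})
```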
|
muratsimsek003/turkish-loodos-bert-base-uncased-qa | muratsimsek003 | 2024-10-25T15:59:56Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-10-25T08:06:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pwork7/rlhflow_mixture_dart_w_sys_iter2 | pwork7 | 2024-10-25T15:57:06Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T15:53:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bonbone/tmp_trainer | Bonbone | 2024-10-25T15:48:52Z | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-10-25T15:48:18Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5899
- Model Preparation Time: 0.003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.3.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
hazem74/medical_summarization | hazem74 | 2024-10-25T15:46:20Z | 129 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-25T12:38:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pwork7/rlhflow_mixture_dart_w_sys_iter3 | pwork7 | 2024-10-25T15:46:15Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T15:42:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jellon/Pantheon-RP-Pure-1.6.2-22b-Small-exl2-6bpw | Jellon | 2024-10-25T15:42:49Z | 8 | 1 | null | [
"safetensors",
"mistral",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"en",
"base_model:Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small",
"base_model:quantized:Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small",
"license:other",
"6-bit",
"exl2",
"region:us"
] | null | 2024-10-25T12:52:12Z | ---
base_model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
---
6bpw exl2 quant of: https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
---

# Pantheon-RP-Pure-1.6.2-22b-Small
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold: beyond being summonable personas, these personalities enhance the general roleplay experience, helping to encompass personality traits, accents, and mannerisms that language models might otherwise find difficult to convey well.
**Editions available:**
- **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing
- **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF)
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
## Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this:
- The 10k most diverse entries from my SlimOrca Sonnet dataset.
- My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
- My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
- Lyra the Assistant (Coding, summaries and D&D DM questions)
**TLDR;** Download. Mistral prompt format. Have fun! Leave feedback!
## Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
```
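For anyone running the model outside SillyTavern, a minimal sketch of applying the same preset with `transformers` (the prompt is illustrative, and `min_p` requires a reasonably recent transformers release):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small"  # unquantized base; the
# exl2 quant in this repo needs an exllamav2-based loader instead
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "[INST] You are a caring and empathetic sentient AI companion named Lyra.\n"
    "Gryphe: Good day, Lyra.[/INST] Lyra:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    repetition_penalty=1.05,
    min_p=0.05,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```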
Besides the SlimOrca subset all other datasets were trained with character names added. Enable this at all times for an optimal experience.
## General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.`
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it;
```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```
The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence.
**Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
**Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
### **Persona:** Aiva
**System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.`
### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`
### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`
### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`
### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`
### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`
### **Persona:** Raza
**System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.`
### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`
### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`
**Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.
### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`
### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`
### **Persona:** Xala
**System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.`
## Prompt Format
Mistral's prompt format is so weird, but here it is:
```
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
```
## What's next?
I've started working with Latitude (the creators of AI Dungeon), which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far, I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
|
ZeroXClem/Llama3.1-TheiaFire-DarkFusion-8B | ZeroXClem | 2024-10-25T15:38:29Z | 20 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"model_fusion",
"TIES",
"Llama3.1",
"crypto",
"blockchain",
"coding_assistant",
"creative_writing",
"roleplaying",
"uncensored",
"latent_diffusion",
"long_context",
"agentic_AI",
"multi_domain",
"research",
"instruction-following",
"technical_reasoning",
"task_generalization",
"AI_tools",
"GPT",
"conversational",
"dataset:CoinMarketCap",
"dataset:blockchain_projects",
"dataset:agentic_code_DPO",
"base_model:Chainbase-Labs/Theia-Llama-3.1-8B-v1",
"base_model:merge:Chainbase-Labs/Theia-Llama-3.1-8B-v1",
"base_model:DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst",
"base_model:merge:DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst",
"base_model:EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO",
"base_model:merge:EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO",
"base_model:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored",
"base_model:merge:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T14:56:30Z | ---
license: apache-2.0
tags:
- merge
- model_fusion
- TIES
- Llama3.1
- crypto
- blockchain
- coding_assistant
- creative_writing
- roleplaying
- uncensored
- latent_diffusion
- long_context
- agentic_AI
- multi_domain
- research
- instruction-following
- technical_reasoning
- task_generalization
- AI_tools
- GPT
base_model:
- Chainbase-Labs/Theia-Llama-3.1-8B-v1
- EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
- DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst
datasets:
- CoinMarketCap
- blockchain_projects
- agentic_code_DPO
libraries: transformers
library_name: transformers
---
# ZeroXClem/Llama3.1-TheiaFire-DarkFusion-8B
**Architecture:** Llama 3.1 - 8B
**Proposed Name:** Llama3.1-TheiaFire-DarkFusion-8B
**Merge Method:** TIES
**Merge Date:** 10/25/2024
**License:** Apache 2.0
---
## Model Overview
The **Llama3.1-TheiaFire-DarkFusion-8B** is a highly specialized fusion of four cutting-edge models, meticulously combined to provide an exceptional balance of technical reasoning, creativity, and uncensored freedom for a variety of use cases. Whether you need advanced coding assistance, blockchain insights, creative roleplaying, or general-purpose AI capabilities, this model delivers state-of-the-art results.
This model was merged using the **TIES** merge method to ensure optimal blending of layer weights and parameter configurations, resulting in a model that excels in multiple domains.
---
For optimal results, leave the system prompt blank within LMStudio; the tokenizer seems to struggle with system prompts.
## Model Components
The following models were merged to create **Llama3.1-TheiaFire-DarkFusion-8B**:
1. **[Theia-Llama-3.1-8B-v1](https://huggingface.co/Chainbase-Labs/Theia-Llama-3.1-8B-v1)**
- **Purpose:** Balances technical vision and crypto capabilities.
- **Training Focus:** This model specializes in blockchain data and was trained on a large dataset of crypto whitepapers, research reports, and market data.
- **Unique Feature:** Fine-tuned using LoRA for optimized crypto-specific performance.
2. **[EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO](https://huggingface.co/EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO)**
- **Purpose:** Specialized in agentic reasoning and advanced coding tasks.
- **Unique Feature:** This model is equipped with a 128K context window and comes with built-in tools for ReAct, calculator, search, and more.
3. **[aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored)**
- **Purpose:** Provides uncensored, creativity-driven responses ideal for writing, role-playing, and in-depth conversations.
- **Unique Feature:** Uncensored nature allows for open exploration of creative writing and darker, more complex roleplay scenarios.
4. **[DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst](https://huggingface.co/DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst)**
- **Purpose:** Enhances performance with latent diffusion model blending.
- **Unique Feature:** This model builds upon Llama-3.1's foundation and improves unseen task generalization with latent diffusion.
---
## Model Specifications
### Merge Configuration
```yaml
# Llama3.1-TheiaFire-DarkFusion-8B Merge Configuration
models:
- model: Chainbase-Labs/Theia-Llama-3.1-8B-v1
parameters:
density: 0.4 # Balancing technical vision and crypto capabilities
weight: 0.3
- model: EpistemeAI/Fireball-Meta-Llama-3.2-8B-Instruct-agent-003-128k-code-DPO
parameters:
density: 0.6 # Giving priority to code-based reasoning and agentic capabilities
weight: 0.4
- model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
parameters:
density: 0.5 # Focus on creativity and uncensored roleplay flexibility
weight: 0.2
- model: DeepAutoAI/ldm_soup_Llama-3.1-8B-Inst
parameters:
density: 0.5 # Blending latent diffusion capabilities for unseen tasks
weight: 0.1
merge_method: ties
base_model: Theia-Llama-3.1-8B-v1
dtype: bfloat16
parameters:
normalize: true
out_dtype: float16
```
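The YAML above follows the config format of Arcee's mergekit (naming the tool is an assumption; the card only states the TIES method). A sketch of applying such a config with mergekit's Python API:
```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Paths are placeholders; the YAML file holds the configuration shown above.
with open("theiafire-darkfusion.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama3.1-TheiaFire-DarkFusion-8B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```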
---
## Intended Use Cases
1. **Crypto Analysis & Blockchain Projects**
- Leverages data from CoinMarketCap and research reports for in-depth analysis of blockchain projects and crypto markets.
- Ideal for creating blockchain-related content or automating crypto data analysis.
2. **Advanced Coding Assistant**
- Built-in support for agentic behavior such as reasoning and action, making it perfect for AI-driven coding assistance.
- Handles large-scale coding projects with tools like search and calculator integration.
3. **Creative Writing & Roleplay**
- **Uncensored output** allows for rich, expressive writing ideal for novels, creative pieces, or roleplay scenarios.
- Capable of producing nuanced, emotionally complex character responses in roleplaying games or interactive storytelling.
4. **Unseen Task Generalization**
- With the latent diffusion capabilities, this model can handle unseen tasks by learning weight distributions in an adaptive manner, improving performance on novel datasets or tasks.
---
## Performance
- The model has shown significant improvements in **multi-domain reasoning**, **code generation**, and **unconstrained creative output**.
- **Enhanced task generalization** due to latent diffusion model blending techniques.
---
## Model Capabilities
- **Context Window**: 128K (capable of handling long-form tasks like novel writing and in-depth research).
- **Agentic Tools**: Built-in tools like search and calculator.
- **Safety**: While uncensored, responsible prompting is encouraged to ensure the best user experience and ethical usage.
---
## Usage
This model can be used with popular AI libraries like **Transformers** and **LangChain**. Below is a basic setup using **Transformers**:
### Example Code
```python
import transformers
import torch
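# Hub id (or local path) of the merged checkpoint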
model_id = "Llama3.1-TheiaFire-DarkFusion-8B"
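# Build a chat-capable text-generation pipeline in bfloat16, spread across available devices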
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an AI assistant skilled in coding and creative writing."},
{"role": "user", "content": "Please write me a Python function to compute the factorial of a number."}
]
outputs = pipeline(messages, max_new_tokens=256)
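# The chat pipeline returns the whole conversation; the last entry is the newly generated assistant message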
print(outputs[0]["generated_text"][-1])
```
---
## Limitations
- **Uncensored Output**: While this model offers creative freedom, it may produce content that could be considered inappropriate or unsuitable for certain contexts.
- **Bias**: As with all language models, this one may reflect inherent biases in the training data. Users are encouraged to review and edit the outputs before use.
---
## Acknowledgments
This model is a collective effort, combining the groundbreaking work from:
- **Chainbase Labs** (for Theia-Llama)
- **EpistemeAI** (for Fireball Meta-Llama)
- **Aifeifei798** (for DarkIdol)
- **DeepAutoAI** (for LDM Soup)
Special thanks to the open-source community and the developers who contributed to the training and fine-tuning of these models.
--- |
HumanF-MarkrAI/Gukbap-Qwen2.5-7B | HumanF-MarkrAI | 2024-10-25T15:36:21Z | 17 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2305.11206",
"arxiv:2304.12244",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T14:01:55Z | ---
library_name: transformers
tags: []
---
# HumanF-MarkrAI/Gukbap-Qwen2.5-7B🍚
## Model Details🍚
### Model Description
- **Developed by:** HumanF-MarkrAI
- **Model type:** Ko-Qwen2.5-7B
- **Language(s):** Korean
- **Context Length:** 8192
- **License:** cc-by-nc-4.0
- **Finetuned from model:** [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
### Model Sources
For training, we used 4× `A100 40GB` GPUs.
### Implications🍚
**Achieving Top-Level Korean Language Performance Surpassing GPT-4 Using Only Open-Source LLMs🔥**
Recently, numerous state-of-the-art (SOTA) models **have leveraged data generated by private models (e.g., ChatGPT, GPT-4) for LLM training,** as seen in projects like `OpenOrca`, `Ultrafeedback`, and `OpenHermes`.
However, this approach **may violate these private models' terms of service (ToS).**
For instance, OpenAI's license explicitly states: **"⚠️Use Limitation: Creating services that compete with OpenAI.⚠️"**
This implies that using data generated by private models to create unrestricted, open LLMs is challenging.
In this context, our model is significant in that **it has been trained solely on a proprietary dataset generated through open-source models.** Furthermore, it achieved an impressive score of **🔥8.39🔥** in the Korean LogicKor evaluation, **the SOTA among Korean LLMs with 7B parameters or fewer.**
The **Gukbap-Series LLM🍚** was developed using the data processing and supervised fine-tuning (SFT) methods proposed by **LIMA** and **WizardLM**. This demonstrates **"the potential to create unrestricted, general-purpose LLMs using datasets generated solely with open-source LLMs."**
### Training Method (SFT)
The following papers describe the foundational methodologies behind our dataset construction and training methods.
- [LIMA](https://arxiv.org/abs/2305.11206).
- [WizardLM](https://arxiv.org/abs/2304.12244).
- [Near Dedup](https://arxiv.org/abs/2304.12244).
### SFT Datasets (Private)
To build the `Open-Source based dataset`, we used `microsoft/WizardLM-2-8x22B` through [DeepInfra](https://deepinfra.com/).
Our datasets are generated with the `Evolving system` proposed by [WizardLM](https://wizardlm.github.io/WizardLM2/).
For training, we used 1,849 training samples and 200 validation samples.
- **Wizard-Korea-Datasets:** [MarkrAI/Markr_WizardLM_train_ver4](https://huggingface.co/datasets/MarkrAI/Markr_WizardLM_train_ver4).
- **Wizard-Korea-Valid:** [WizardLM_Evol_valid](https://huggingface.co/datasets/MarkrAI/WizardLM_Evol_valid).
> Validation loss (epoch 15; Learning rate: 1e-5): 0.9075
### Benchmark Score (Zero-shot)
We evaluated the model internally on [LogicKor](https://github.com/instructkr/LogicKor).
We used [**gpt-4-1106-preview**](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) as the judge, in the same manner as the `LogicKor-v2` eval model.
> (GPT-4o occasionally makes errors when grading. For example, it sometimes assigns a score of 0 for English responses to questions that were supposed to be answered in English.)
| Model | Reasoning | Math | Writing | Coding | Understanding | Grammar | **Single-turn** | **Multi-turn** | **Overall** |
|:---------:|:-----:|:------:|:-----:|:-----:|:----:|:-----:|:-----:|:-----:|:----:|
| [OpenAI/gpt-4o-2024-05-13](https://lk.instruct.kr/832k1b3wb3x00e4?file=default_xwfHncVI2v.jsonl) | 9.50 | 8.71 | 9.42 | 9.21 | 9.71 | 9.42 | 9.42 | 9.23 | 9.33 |
| [Anthropic/claude-3-5-sonnet-20240620](https://lk.instruct.kr/rf8n4j9h6vg1bq7?file=1_shot_R6talIb9Cq.jsonl) | 8.64 | 8.42 | 9.85 | 9.78 | 9.92 | 9.21 | 9.26 | 9.35 | 9.30 |
| [google/gemini-1.5-pro-001](https://lk.instruct.kr/d54q3zaydbamaos?file=default_zE0CfbdTR3.jsonl) | 9.07 | 8.57 | 9.57 | 9.78 | 9.57 | 9.21 | 9.40 | 9.19 | 9.23 |
|----|----|----|----|----|----|----|----|----|----|
| **Gukbap-Qwen2.5-7B🍚** | **8.57** | **8.93** | **9.50** | 9.07 | **9.21** | 5.07 | 8.71 | 8.07 | **8.39** |
| [Gukbap-Qwen2-7B🍚](https://huggingface.co/HumanF-MarkrAI/Gukbap-Qwen2-7B) | 5.71 | 6.43 | 8.07 | **9.14** | 7.29 | 3.57 | 7.02 | 6.38 | 6.70 |
| [mirlab/AkaLlama-llama3-70b-v0.1](https://lk.instruct.kr/p9nzhh5ct0strpo?file=default_1ya4ZKRlUm.jsonl) | 5.14 | 5.35 | 4.14 | 9.00 | 7.85 | **7.50** | 5.97 | 7.02 | 6.50 |
| [Qwen/Qwen2-7B-Instruct](https://lk.instruct.kr/gx4p1k3jojt977d?file=default_guHriJEiaj.jsonl) | 6.07 | 4.71 | 7.21 | 7.00 | 8.00 | 4.85 | 6.61 | 6.00 | 6.30 |
| [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://lk.instruct.kr/tnn389my7sa36a7?file=default_bXVomDLocN.jsonl) | 6.00 | 3.64 | 6.64 | 5.64 | 8.42 | 5.85 | 6.61 | 5.45 | 6.01 |
If you want to check the model's output, please see our ["answer"](https://huggingface.co/HumanF-MarkrAI/Gukbap-Qwen2.5-7B/blob/main/Gukbap-Qwen2.5-7B.jsonl) file!
### Benchmark Code
Our code is based on maywell's [LogicKor code](https://github.com/instructkr/LogicKor).
We followed maywell's evaluation setup, including the `judge_template`, `prompt`, etc.
### Chat Prompt
```
<|im_start|>user
Hello! My favorite food is Gukbap🍚!<|im_end|>
<|im_start|>assistant
(model answer)
```
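For reference, a minimal sketch of applying this chat format with 🤗 Transformers (the repo id comes from this card; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HumanF-MarkrAI/Gukbap-Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello! My favorite food is Gukbap🍚!"}]
# apply_chat_template renders the <|im_start|>/<|im_end|> turns shown above
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```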
### Gukbap-Series models🍚🍚
- [Gukbap-Mistral-7B🍚](https://huggingface.co/HumanF-MarkrAI/Gukbap-Mistral-7B)
- [Gukbap-Qwen-7B🍚](https://huggingface.co/HumanF-MarkrAI/Gukbap-Qwen2-7B)
- [Gukbap-Gemma-9B🍚](https://huggingface.co/HumanF-MarkrAI/Gukbap-Gemma2-9B)
### BibTeX
```
@article{HumanF-MarkrAI,
title={Gukbap-Qwen2.5-7B},
author={MarkrAI},
year={2024},
url={https://huggingface.co/HumanF-MarkrAI}
}
``` |
besimray/miner_id_3_356953bd-f938-4862-a3a5-21d61fce48ce_1729861973 | besimray | 2024-10-25T15:35:12Z | 16 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-10-25T13:12:53Z | ---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: miner_id_3_356953bd-f938-4862-a3a5-21d61fce48ce_1729861973
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- thinker_train_data.json
ds_type: json
path: /workspace/input_data/thinker_train_data.json
type:
field_input: assistant
field_instruction: reasoning
field_output: user
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 10
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: besimray/miner_id_3_356953bd-f938-4862-a3a5-21d61fce48ce_1729861973
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 5
mlflow_experiment_name: /tmp/thinker_train_data.json
model_type: LlamaForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
save_strategy: steps
sequence_len: 4096
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0.05
wandb_entity: besimray24-rayon
wandb_mode: online
wandb_project: Public_TuningSN
wandb_run: miner_id_24
wandb_runid: 356953bd-f938-4862-a3a5-21d61fce48ce
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# miner_id_3_356953bd-f938-4862-a3a5-21d61fce48ce_1729861973
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on the `thinker_train_data.json` dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.7815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5337 | 0.0056 | 1 | 1.4862 |
| 0.9214 | 0.0563 | 10 | 0.9408 |
| 0.835 | 0.1125 | 20 | 0.8533 |
| 0.7869 | 0.1688 | 30 | 0.8354 |
| 0.8884 | 0.2250 | 40 | 0.8179 |
| 0.774 | 0.2813 | 50 | 0.8094 |
| 0.8592 | 0.3376 | 60 | 0.8055 |
| 0.7419 | 0.3938 | 70 | 0.8004 |
| 0.7387 | 0.4501 | 80 | 0.7927 |
| 0.7656 | 0.5063 | 90 | 0.7874 |
| 0.7726 | 0.5626 | 100 | 0.7867 |
| 0.9268 | 0.6188 | 110 | 0.7775 |
| 0.8375 | 0.6751 | 120 | 0.7803 |
| 0.8536 | 0.7314 | 130 | 0.7765 |
| 0.6834 | 0.7876 | 140 | 0.7728 |
| 0.8245 | 0.8439 | 150 | 0.7661 |
| 0.6808 | 0.9001 | 160 | 0.7710 |
| 0.773 | 0.9564 | 170 | 0.7659 |
| 0.6604 | 1.0127 | 180 | 0.7712 |
| 0.5496 | 1.0689 | 190 | 0.7819 |
| 0.5153 | 1.1252 | 200 | 0.7815 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Pandistellina/distilgpt2-finetuned-wikitext2 | Pandistellina | 2024-10-25T15:33:58Z | 197 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T15:03:50Z | ---
library_name: transformers
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7487 | 1.0 | 2334 | 3.6663 |
| 3.648 | 2.0 | 4668 | 3.6462 |
| 3.6015 | 3.0 | 7002 | 3.6425 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
fadelfitrah/python-codegen | fadelfitrah | 2024-10-25T15:30:48Z | 128 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T15:30:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Joodson/Bert_Sentiment_Analysis | Joodson | 2024-10-25T15:23:55Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-25T15:14:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JacobLinCool/whisper-large-v3-turbo-common_voice_16_1-zh-TW-1 | JacobLinCool | 2024-10-25T15:23:26Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-25T15:22:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
syolo/GreyKnitCrochet | syolo | 2024-10-25T15:19:51Z | 5 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T14:54:10Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: GreyKnitCrochet
---
# Greyknitcrochet
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `GreyKnitCrochet` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('syolo/GreyKnitCrochet', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
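For example, the LoRA's strength can be scaled or baked into the base weights. A minimal sketch using the standard diffusers adapter API (the adapter name `greyknit` and the weight `0.8` are arbitrary choices for illustration):
```py
# Continuing from the pipeline above: register the LoRA under a name so it can be scaled
pipeline.load_lora_weights('syolo/GreyKnitCrochet', weight_name='lora.safetensors', adapter_name='greyknit')
pipeline.set_adapters(['greyknit'], adapter_weights=[0.8])  # dial the LoRA influence up or down
pipeline.fuse_lora()  # optionally merge the LoRA into the base weights for faster inference
```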
|
josedonoso/blip-ecg-khan-rotated-4 | josedonoso | 2024-10-25T15:09:35Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-10-25T15:07:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
luigi86/UnslopNemo-12B-v4.1_mlx-8bit | luigi86 | 2024-10-25T15:05:23Z | 5 | 1 | null | [
"safetensors",
"mistral",
"8-bit",
"region:us"
] | null | 2024-10-25T14:52:47Z | # MLX Format and Quantizations for Unslop Nemo 12b v4.1
Quantized to 8-bit precision and tested with the `mlx_lm` utility on an M1 Max with 64 GiB of unified memory.
See [original model](https://huggingface.co/TheDrummer/UnslopNemo-12B-v4.1) for further details.
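A minimal `mlx_lm` sketch for loading and sampling from this quant (the prompt and token budget are illustrative):
```python
from mlx_lm import load, generate

# Load the 8-bit MLX weights from this repo
model, tokenizer = load("luigi86/UnslopNemo-12B-v4.1_mlx-8bit")

prompt = "Write a short scene set in a rain-soaked city."
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```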
|
zelk12/MT1-Gen1-gemma-2-9B | zelk12 | 2024-10-25T15:01:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT1-Gen1-BGMMMU-gemma-2-9B",
"base_model:merge:zelk12/MT1-Gen1-BGMMMU-gemma-2-9B",
"base_model:zelk12/MT1-Gen1-IMA-gemma-2-9B",
"base_model:merge:zelk12/MT1-Gen1-IMA-gemma-2-9B",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-23T19:00:00Z | ---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- zelk12/MT1-Gen1-IMA-gemma-2-9B
- zelk12/MT1-Gen1-BGMMMU-gemma-2-9B
model-index:
- name: MT1-Gen1-gemma-2-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 79.74
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 12.24
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.53
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 13.1
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.51
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT1-Gen1-gemma-2-9B
name: Open LLM Leaderboard
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-Gen1-IMA-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen1-IMA-gemma-2-9B)
* [zelk12/MT1-Gen1-BGMMMU-gemma-2-9B](https://huggingface.co/zelk12/MT1-Gen1-BGMMMU-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT1-Gen1-IMA-gemma-2-9B
- model: zelk12/MT1-Gen1-BGMMMU-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT1-Gen1-IMA-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.666666667
```
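For intuition, SLERP interpolates along the arc between two weight vectors rather than the straight line between them, preserving their norm geometry. A minimal NumPy sketch of the idea (illustrative only, not the mergekit internals):
```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    cos_theta = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:  # vectors nearly parallel: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# t = 0.666666667 as in the config above: the result leans two-thirds toward the second model
```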
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zelk12__MT1-Gen1-gemma-2-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |33.23|
|IFEval (0-Shot) |79.74|
|BBH (3-Shot) |44.27|
|MATH Lvl 5 (4-Shot)|12.24|
|GPQA (0-shot) |12.53|
|MuSR (0-shot) |13.10|
|MMLU-PRO (5-shot) |37.51|
|
CohenQu/action_25000 | CohenQu | 2024-10-25T14:54:33Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T14:34:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF | mradermacher | 2024-10-25T14:53:06Z | 204 | 2 | transformers | [
"transformers",
"gguf",
"merge",
"model_stock",
"DarkStock",
"Aspire",
"Storm",
"Llama3",
"DarkEnigma",
"instruction-following",
"creative-writing",
"coding",
"roleplaying",
"long-form-generation",
"research",
"bfloat16",
"en",
"dataset:openbuddy/openbuddy-llama3.1-8b-v22.2-131k",
"dataset:THUDM/LongWriter-llama3.1-8b",
"dataset:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored",
"base_model:ZeroXClem/Llama3.1-DarkStorm-Aspire-8B",
"base_model:quantized:ZeroXClem/Llama3.1-DarkStorm-Aspire-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-25T13:38:49Z | ---
base_model: ZeroXClem/Llama3.1-DarkStorm-Aspire-8B
datasets:
- openbuddy/openbuddy-llama3.1-8b-v22.2-131k
- THUDM/LongWriter-llama3.1-8b
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- model_stock
- DarkStock
- Aspire
- Storm
- Llama3
- DarkEnigma
- instruction-following
- creative-writing
- coding
- roleplaying
- long-form-generation
- research
- bfloat16
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ZeroXClem/Llama3.1-DarkStorm-Aspire-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
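If you prefer scripting the download, here is a minimal `huggingface_hub` sketch that fetches a single quant from the table below (any filename listed there works):
```python
from huggingface_hub import hf_hub_download

# Download one quant file rather than the whole repo
path = hf_hub_download(
    repo_id="mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF",
    filename="Llama3.1-DarkStorm-Aspire-8B.i1-Q4_K_M.gguf",
)
print(path)
```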
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3.1-DarkStorm-Aspire-8B-i1-GGUF/resolve/main/Llama3.1-DarkStorm-Aspire-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kholiavko/reception-llama-3.1-8b-test-10-gguf | kholiavko | 2024-10-25T14:48:15Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-25T14:42:17Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** kholiavko
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF | bartowski | 2024-10-25T14:44:14Z | 2,154 | 25 | null | [
"gguf",
"text-generation",
"base_model:rombodawg/Rombos-LLM-V2.5-Qwen-32b",
"base_model:quantized:rombodawg/Rombos-LLM-V2.5-Qwen-32b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-09-29T09:18:23Z | ---
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
license: apache-2.0
pipeline_tag: text-generation
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Replete-LLM-V2.5-Qwen-32b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3972">b3972</a> for quantization.
Original model: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
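For reference, a minimal Python sketch (not part of the original card) that assembles this ChatML-style prompt from a message list; the helper name and message dicts are illustrative assumptions:
```python
# Illustrative sketch: build the ChatML-style prompt shown above.
def build_chatml_prompt(messages):
    # messages: list of {"role": "system" | "user" | "assistant", "content": str}
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # leave the turn open for the model
    return "".join(parts)

print(build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize GGUF quantization in one sentence."},
]))
```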
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Replete-LLM-V2.5-Qwen-32b-f16.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/tree/main/Replete-LLM-V2.5-Qwen-32b-f16) | f16 | 65.54GB | true | Full F16 weights. |
| [Replete-LLM-V2.5-Qwen-32b-Q8_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q8_0.gguf) | Q8_0 | 34.82GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Replete-LLM-V2.5-Qwen-32b-Q6_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q6_K_L.gguf) | Q6_K_L | 27.26GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q6_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q6_K.gguf) | Q6_K | 26.89GB | false | Very high quality, near perfect, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q5_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q5_K_L.gguf) | Q5_K_L | 23.74GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q5_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q5_K_M.gguf) | Q5_K_M | 23.26GB | false | High quality, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q5_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q5_K_S.gguf) | Q5_K_S | 22.64GB | false | High quality, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q4_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_K_L.gguf) | Q4_K_L | 20.43GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q4_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_K_M.gguf) | Q4_K_M | 19.85GB | false | Good quality, default size for most use cases, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q4_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_K_S.gguf) | Q4_K_S | 18.78GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q4_0.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q4_0.gguf) | Q4_0 | 18.71GB | false | Legacy format, generally not worth using over similarly sized formats |
| [Replete-LLM-V2.5-Qwen-32b-IQ4_NL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ4_NL.gguf) | IQ4_NL | 18.68GB | false | Similar to IQ4_XS, but slightly larger. |
| [Replete-LLM-V2.5-Qwen-32b-Q3_K_XL.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q3_K_XL.gguf) | Q3_K_XL | 17.93GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Replete-LLM-V2.5-Qwen-32b-IQ4_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ4_XS.gguf) | IQ4_XS | 17.69GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Replete-LLM-V2.5-Qwen-32b-Q3_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q3_K_L.gguf) | Q3_K_L | 17.25GB | false | Lower quality but usable, good for low RAM availability. |
| [Replete-LLM-V2.5-Qwen-32b-Q3_K_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q3_K_M.gguf) | Q3_K_M | 15.94GB | false | Low quality. |
| [Replete-LLM-V2.5-Qwen-32b-IQ3_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ3_M.gguf) | IQ3_M | 14.81GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Replete-LLM-V2.5-Qwen-32b-Q3_K_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q3_K_S.gguf) | Q3_K_S | 14.39GB | false | Low quality, not recommended. |
| [Replete-LLM-V2.5-Qwen-32b-IQ3_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ3_XS.gguf) | IQ3_XS | 13.71GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Replete-LLM-V2.5-Qwen-32b-Q2_K_L.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q2_K_L.gguf) | Q2_K_L | 13.07GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Replete-LLM-V2.5-Qwen-32b-Q2_K.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-Q2_K.gguf) | Q2_K | 12.31GB | false | Very low quality but surprisingly usable. |
| [Replete-LLM-V2.5-Qwen-32b-IQ2_M.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ2_M.gguf) | IQ2_M | 11.26GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [Replete-LLM-V2.5-Qwen-32b-IQ2_S.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ2_S.gguf) | IQ2_S | 10.39GB | false | Low quality, uses SOTA techniques to be usable. |
| [Replete-LLM-V2.5-Qwen-32b-IQ2_XS.gguf](https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF/blob/main/Replete-LLM-V2.5-Qwen-32b-IQ2_XS.gguf) | IQ2_XS | 9.96GB | false | Low quality, uses SOTA techniques to be usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF --include "Replete-LLM-V2.5-Qwen-32b-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF --include "Replete-LLM-V2.5-Qwen-32b-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Replete-LLM-V2.5-Qwen-32b-Q8_0) or download them all in place (./)
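If you prefer scripting the download, the same operations can be done with the `huggingface_hub` Python API; a short sketch mirroring the CLI commands above:
```python
# Sketch: the same downloads as the CLI commands above, via huggingface_hub.
from huggingface_hub import hf_hub_download, snapshot_download

repo = "bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF"

# Single file (equivalent to --include with one filename):
hf_hub_download(repo_id=repo,
                filename="Replete-LLM-V2.5-Qwen-32b-Q4_K_M.gguf",
                local_dir=".")

# All parts of a split quant (equivalent to --include "...-Q8_0/*"):
snapshot_download(repo_id=repo,
                  allow_patterns=["Replete-LLM-V2.5-Qwen-32b-Q8_0/*"],
                  local_dir=".")
```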
## Q4_0_X_X
These are *NOT* for Metal (Apple) offloading, only ARM chips.
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
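To make the sizing rule above concrete, here is a small illustrative sketch (not from the original card) that applies it to the file sizes in the table above; the 1.5GB headroom value is an assumption within the suggested 1-2GB range:
```python
# Illustrative sketch of the sizing rule above: pick the largest quant whose
# file size fits within (available memory in GB - headroom).
QUANTS = {  # quant type -> file size in GB, taken from the table above
    "Q8_0": 34.82, "Q6_K": 26.89, "Q5_K_M": 23.26, "Q4_K_M": 19.85,
    "IQ4_XS": 17.69, "Q3_K_M": 15.94, "IQ3_M": 14.81, "IQ2_M": 11.26,
}

def pick_quant(memory_gb, headroom_gb=1.5):
    fitting = {q: s for q, s in QUANTS.items() if s <= memory_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None  # largest that fits

print(pick_quant(24))  # a 24GB GPU -> 'Q4_K_M'
```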
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF | mradermacher | 2024-10-25T14:37:09Z | 100 | 2 | transformers | [
"transformers",
"gguf",
"moe",
"frankenmoe",
"merge",
"mergekit",
"en",
"base_model:moeru-ai/L3.1-Moe-2x8B-v0.2",
"base_model:quantized:moeru-ai/L3.1-Moe-2x8B-v0.2",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-25T14:08:54Z | ---
base_model: moeru-ai/L3.1-Moe-2x8B-v0.2
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- moe
- frankenmoe
- merge
- mergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/moeru-ai/L3.1-Moe-2x8B-v0.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
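For the multi-part case, a hedged Python sketch of the usual approach: simple binary concatenation of the parts in order. The `*.partMofN` naming is an assumption; files produced by llama.cpp's `gguf-split` tool should be merged with that tool instead.
```python
# Sketch: merge raw-split GGUF parts (e.g. model.gguf.part1of3) into one file.
import glob, shutil

parts = sorted(glob.glob("model.gguf.part*of*"))  # lexicographic order works up to 9 parts
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)  # append each part in order
```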
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q2_K.gguf) | i1-Q2_K | 5.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 6.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.9 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.9 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.9 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q4_0.gguf) | i1-Q4_0 | 8.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Moe-2x8B-v0.2-i1-GGUF/resolve/main/L3.1-Moe-2x8B-v0.2.i1-Q6_K.gguf) | i1-Q6_K | 11.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
10zinten/TTS-run-25-10-2024 | 10zinten | 2024-10-25T14:29:52Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-25T14:29:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimitribarbot/CodeLlama-13B-Instruct-GPTQ-TensorRT-LLM-RTX-4090 | dimitribarbot | 2024-10-25T14:25:44Z | 7 | 1 | null | [
"llama-2",
"tensorrt-llm",
"code-llama",
"text-generation",
"conversational",
"code",
"base_model:meta-llama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:meta-llama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"region:us"
] | text-generation | 2024-10-25T12:17:48Z | ---
language:
- code
license: llama2
model_creator: Meta
model_name: CodeLlama 13B Instruct
inference: false
base_model:
- meta-llama/CodeLlama-13b-Instruct-hf
pipeline_tag: text-generation
tags:
- llama-2
- tensorrt-llm
- code-llama
prompt_template: >
[INST] Write code to solve the following coding problem that obeys the
constraints and passes the example test cases. Please wrap your code answer
using ```:
{prompt}
[/INST]
quantized_by: TheBloke
---
# CodeLlama 13B Instruct - GPTQ - TensorRT-LLM - RTX4090
- Model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [CodeLlama 13B Instruct](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf)
- Quantized model: [TheBloke CodeLlama 13B Instruct - GPTQ](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ)
## Description
This repo contains TensorRT-LLM GPTQ model files for [Meta's CodeLlama 13B Instruct](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf)
built for a single RTX 4090 card and using tensorrt_llm version 0.15.0.dev2024101500. It's a 4-bit quantized version based on the main branch of
the [TheBloke CodeLlama 13B Instruct - GPTQ](https://huggingface.co/TheBloke/CodeLlama-13B-Instruct-GPTQ) model.
## TensorRT commands
To build this model, the following commands were run from the base folder of the [TensorRT-LLM repository](https://github.com/NVIDIA/TensorRT-LLM)
(see installation instructions in the repository for more information):
```shell
python examples/llama/convert_checkpoint.py \
--model_dir ./CodeLlama-13b-Instruct-hf \
--output_dir ./CodeLlama-13b-Instruct-hf_checkpoint \
--dtype float16 \
--quant_ckpt_path ./CodeLlama-13B-Instruct-GPTQ/model.safetensors \
--use_weight_only \
--weight_only_precision int4_gptq \
--per_group
```
And then:
```shell
trtllm-build \
--checkpoint_dir ./CodeLlama-13b-Instruct-hf_checkpoint \
--output_dir ./CodeLlama-13B-Instruct-GPTQ_TensorRT \
--gemm_plugin float16 \
--max_input_len 8192 \
--max_seq_len 8192
```
## Prompt template: CodeLlama
```
[INST] <<SYS>>
Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:
<</SYS>>
{prompt}
[/INST]
```
## How to use this model from Python code
### Using TensorRT-LLM API
#### Install the necessary packages
```shell
pip3 install tensorrt_llm==0.15.0.dev2024101500 -U --pre --extra-index-url https://pypi.nvidia.com
```
Beware that this command should not be run from inside a virtual environment (or it should be run twice: once outside the venv and then again from within it).
#### Use the TensorRT-LLM API
```python
from tensorrt_llm import LLM, SamplingParams
system_prompt = \
"[INST] <<SYS>>\n" +\
"Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:" +\
"\n<</SYS>>\n\n"
user_prompt = \
"<Your user prompt>" +\
" [/INST] "
prompts = [
system_prompt + user_prompt,
]
sampling_params = SamplingParams(max_tokens=512, temperature=1.31, top_p=0.14, top_k=49, repetition_penalty=1.17)
llm = LLM(model="./CodeLlama-13B-Instruct-GPTQ_TensorRT")
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
### Using Oobabooga's Text Generation WebUI
Follow the instructions described here: https://github.com/oobabooga/text-generation-webui/pull/5715
Use version 0.15.0.dev2024101500 of tensorrt_llm instead of 0.10.0. |
mradermacher/Mistral-7B-ScaleQuest-i1-GGUF | mradermacher | 2024-10-25T14:11:08Z | 84 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:dyyyyyyyy/ScaleQuest-Math",
"base_model:dyyyyyyyy/Mistral-7B-ScaleQuest",
"base_model:quantized:dyyyyyyyy/Mistral-7B-ScaleQuest",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-10-25T13:55:13Z | ---
base_model: dyyyyyyyy/Mistral-7B-ScaleQuest
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/dyyyyyyyy/Mistral-7B-ScaleQuest
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-7B-ScaleQuest-i1-GGUF/resolve/main/Mistral-7B-ScaleQuest.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SYSU-MUCFC-FinTech-Research-Center/Zhongsi-34B-Instruct | SYSU-MUCFC-FinTech-Research-Center | 2024-10-25T14:06:06Z | 8 | 0 | null | [
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T10:53:45Z | ---
license: apache-2.0
---
|
ysn-rfd/gpt2-large-Q2_K-GGUF | ysn-rfd | 2024-10-25T14:02:38Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:openai-community/gpt2-large",
"base_model:quantized:openai-community/gpt2-large",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-10-25T14:02:34Z | ---
language: en
license: mit
base_model: openai-community/gpt2-large
tags:
- llama-cpp
- gguf-my-repo
---
# ysn-rfd/gpt2-large-Q2_K-GGUF
This model was converted to GGUF format from [`openai-community/gpt2-large`](https://huggingface.co/openai-community/gpt2-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openai-community/gpt2-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ysn-rfd/gpt2-large-Q2_K-GGUF --hf-file gpt2-large-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ysn-rfd/gpt2-large-Q2_K-GGUF --hf-file gpt2-large-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ysn-rfd/gpt2-large-Q2_K-GGUF --hf-file gpt2-large-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ysn-rfd/gpt2-large-Q2_K-GGUF --hf-file gpt2-large-q2_k.gguf -c 2048
```
|
mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF | mradermacher | 2024-10-25T13:57:10Z | 256 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:dyyyyyyyy/ScaleQuest-Math",
"base_model:dyyyyyyyy/DeepSeekMath-7B-ScaleQuest",
"base_model:quantized:dyyyyyyyy/DeepSeekMath-7B-ScaleQuest",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-25T12:12:32Z | ---
base_model: dyyyyyyyy/DeepSeekMath-7B-ScaleQuest
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dyyyyyyyy/DeepSeekMath-7B-ScaleQuest
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeekMath-7B-ScaleQuest-GGUF/resolve/main/DeepSeekMath-7B-ScaleQuest.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
acloudfan/opt-125m-gptq-8bit | acloudfan | 2024-10-25T13:44:31Z | 6 | 0 | null | [
"safetensors",
"opt",
"arxiv:1910.09700",
"8-bit",
"gptq",
"region:us"
] | null | 2024-10-08T12:58:01Z | ---
library_name: transformers
tags: []
---
## Part of a course titled "Generative AI application design & development"
https://genai.acloudfan.com/
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model was created with the auto-gptq library.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
acloudfan/opt-125m-gptq-4bit | acloudfan | 2024-10-25T13:43:44Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-10-11T20:06:29Z | ---
library_name: transformers
tags: []
---
## Part of a course titled "Generative AI application design & development"
https://genai.acloudfan.com/
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF | mradermacher | 2024-10-25T13:40:06Z | 59 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-ProdigyPlus",
"base_model:quantized:bunnycore/Llama-3.2-3B-ProdigyPlus",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-25T12:06:57Z | ---
base_model: bunnycore/Llama-3.2-3B-ProdigyPlus
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-ProdigyPlus
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-ProdigyPlus-GGUF/resolve/main/Llama-3.2-3B-ProdigyPlus.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AIStudioGPT/Gemma-2-9b-it-iski | AIStudioGPT | 2024-10-25T13:37:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T13:16:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
pravin96/distil_whisper_en | pravin96 | 2024-10-25T13:35:29Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:distil-whisper/distil-small.en",
"base_model:finetune:distil-whisper/distil-small.en",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-09-05T11:07:53Z | ---
library_name: transformers
license: mit
base_model: distil-whisper/distil-small.en
tags:
- generated_from_trainer
model-index:
- name: distil_whisper_en
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_whisper_en
This model is a fine-tuned version of [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- training_steps: 350
- mixed_precision_training: Native AMP
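For reference, these settings map onto Hugging Face `Seq2SeqTrainingArguments` roughly as follows (a minimal sketch; `output_dir` and any arguments not listed above are assumptions):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir and unlisted defaults are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="distil_whisper_en",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    warmup_steps=800,
    max_steps=350,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```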
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF | mradermacher | 2024-10-25T13:26:07Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Prodigy",
"base_model:quantized:bunnycore/Llama-3.2-3B-Prodigy",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-25T12:30:15Z | ---
base_model: bunnycore/Llama-3.2-3B-Prodigy
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Prodigy
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
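For a quick local test, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (the filename is one of the quants listed in the table below; the context size and prompt are arbitrary choices):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant from this repo; i1-Q4_K_M is the "fast, recommended" entry below.
path = hf_hub_download(
    repo_id="mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF",
    filename="Llama-3.2-3B-Prodigy.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: Name three uses of a small language model. A:", max_tokens=128)
print(out["choices"][0]["text"])
```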
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ1_S.gguf) | i1-IQ1_S | 1.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ2_S.gguf) | i1-IQ2_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ2_M.gguf) | i1-IQ2_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q2_K.gguf) | i1-Q2_K | 1.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ3_M.gguf) | i1-IQ3_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q4_0.gguf) | i1-Q4_0 | 2.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF/resolve/main/Llama-3.2-3B-Prodigy.i1-Q6_K.gguf) | i1-Q6_K | 3.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
fibleep/Llama-3.1-8B-Instruct-BookToAudiobookTaggerSMv2-ft-4bit | fibleep | 2024-10-25T13:23:55Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-25T13:20:05Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fibleep
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
fibleep/Llama-3.1-8B-Instruct-BookToAudiobookTaggerHSMv2-ft-4bit | fibleep | 2024-10-25T13:23:14Z | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-25T13:20:27Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fibleep
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lamm-mit/x-lora-gemma-7b | lamm-mit | 2024-10-25T13:17:02Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"endpoints_compatible",
"region:us"
] | null | 2024-03-09T18:43:25Z | ---
library_name: transformers
---
# Model Card for X-LoRA-Gemma-7b
X-LoRA-Gemma combines protein, chemical, bio-inspired and mechanics of materials capabilities. We use a set of four LoRA adapters, defined as follows:
1. Bioinspired materials
2. Mechanics and materials
3. Protein mechanics tasks (featuring generative sequence-to-property and inverse capabilities)
4. Quantum-mechanics based molecular properties QM9 (featuring generative SMILES-to-property and inverse capabilities)
The model has a variety of capabilities, including designing proteins, designing molecules, and property calculations.
You will need additional packages to run the molecular design/analysis examples, such as:
```bash
pip install -U transformers peft accelerate bitsandbytes
pip install git+https://github.com/EricLBuehler/xlora.git
pip install -U rdkit scikit-learn tqdm pandas
```
If you want to use `guidance` for inference:
```bash
pip install guidance
```
Sample inference notebook: [X-LoRA-Gemma_Inference.ipynb](https://huggingface.co/lamm-mit/x-lora-gemma-7b/resolve/main/X-LoRA-Gemma_Inference.ipynb)

```python
import torch
from xlora.xlora_utils import load_model

XLoRa_model_name = 'lamm-mit/x-lora-gemma-7b'

model, tokenizer = load_model(
    model_name=XLoRa_model_name,
    device='cuda:0',
    use_flash_attention_2=True,
    dtype=torch.bfloat16,
)

eos_token_id = tokenizer('<end_of_turn>', add_special_tokens=False)['input_ids'][0]
```
```python
def generate_XLoRA_Gemma(system_prompt='You are a helpful assistant. You are familiar with materials science. ',
                         prompt='What is spider silk in the context of bioinspired materials?',
                         repetition_penalty=1., num_beams=1, num_return_sequences=1,
                         top_p=0.9, top_k=256, temperature=.5, max_new_tokens=512, verbatim=False, eos_token=None,
                         add_special_tokens=True, prepend_response='',
                         ):
    if eos_token is None:
        eos_token = tokenizer.eos_token_id

    if system_prompt is None:
        messages = [{"role": "user", "content": prompt}]
    else:
        messages = [{"role": "user", "content": system_prompt + prompt}]

    txt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    txt = txt + prepend_response

    inputs = tokenizer(txt, add_special_tokens=add_special_tokens, return_tensors='pt').to(model.device)
    with torch.no_grad():
        outputs = model.generate(input_ids=inputs["input_ids"],
                                 attention_mask=inputs["attention_mask"],  # usually added automatically by the tokenizer
                                 max_new_tokens=max_new_tokens,
                                 temperature=temperature,  # modulates the next-token probabilities
                                 num_beams=num_beams,
                                 top_k=top_k,
                                 top_p=top_p,
                                 num_return_sequences=num_return_sequences,
                                 eos_token_id=eos_token,
                                 pad_token_id=eos_token,
                                 do_sample=True,
                                 repetition_penalty=repetition_penalty,
                                 )
    return tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:].detach().cpu().numpy(),
                                  skip_special_tokens=True)
```
Then, use as follows:
```python
from IPython.display import display, Markdown

q = '''What is graphene?'''
res = generate_XLoRA_Gemma(system_prompt='You design materials.', prompt=q,
                           max_new_tokens=1024, temperature=0.3, eos_token=eos_token_id)
display(Markdown(res[0]))  # res is a list of decoded sequences
```
### Example: Molecular design

```python
def design_from_target(
    model,
    tokenizer,
    target,
    temperature=0.1,
    num_beams=1,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.0,
    messages=[]
):
    # Format the target line for molecular property generation
    line = f'GenerateMolecularProperties<{return_str(target)}>'
    # Add the line to the message history
    messages.append({"role": "user", "content": line})
    # Apply chat template with optional tokenization
    line = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    # Generate response with specified parameters
    result = generate_response(
        model,
        tokenizer,
        text_input=line,
        num_return_sequences=1,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        max_new_tokens=256
    )[0]
    return result
```
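Note that `return_str` and `generate_response` are not defined in this card; a plausible reconstruction, inferred purely from how they are called above (these are assumptions, not the authors' exact helpers), could be:
```python
import torch

def return_str(vals, fmt='{:.3f}'):
    # Assumed helper: render a numeric target vector as a comma-separated string.
    return ','.join(fmt.format(v) for v in vals)

def generate_response(model, tokenizer, text_input='', num_return_sequences=1,
                      temperature=0.1, top_k=50, top_p=0.95, max_new_tokens=256):
    # Assumed helper: thin wrapper around model.generate that strips the prompt tokens.
    inputs = tokenizer(text_input, return_tensors='pt').to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs,
                                 do_sample=True,
                                 temperature=temperature,
                                 top_k=top_k,
                                 top_p=top_p,
                                 num_return_sequences=num_return_sequences,
                                 max_new_tokens=max_new_tokens)
    return tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:],
                                  skip_special_tokens=True)
```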
Use case:
```python
import numpy as np

target = np.random.rand(12)
SMILES = design_from_target(model, tokenizer, target, messages=[])
print(SMILES)
```
Calculate molecular properties:
```python
def properties_from_SMILES(
    model,
    tokenizer,
    target,
    temperature=0.1,
    top_k=128,
    top_p=0.9,
    num_beams=1,
    repetition_penalty=1.0
):
    # Format the target line for molecular property calculation
    line = f'CalculateMolecularProperties<{target}>'
    # Initialize messages and add the formatted line
    messages = [{"role": "user", "content": line}]
    # Apply chat template with optional tokenization
    line = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    # Generate response with specified parameters
    result = generate_response(
        model,
        tokenizer,
        text_input=line,
        num_return_sequences=1,
        temperature=temperature,
        top_k=top_k,
        top_p=top_p,
        max_new_tokens=256
    )[0]
    # Extract the relevant part of the result and convert it to a float list
    result = extract_start_and_end(result, start_token='[', end_token=']')
    return [float(i) for i in result.split(',')]
```
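`extract_start_and_end` is likewise not defined in the card; given how it is called above, one plausible implementation (an assumption) is:
```python
def extract_start_and_end(text, start_token='[', end_token=']'):
    # Assumed helper: return the substring between the first start_token and the next end_token.
    start = text.find(start_token) + len(start_token)
    end = text.find(end_token, start)
    return text[start:end]
```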

|
recogna-nlp/bode-13b-alpaca-pt-br | recogna-nlp | 2024-10-25T13:08:16Z | 123 | 15 | peft | [
"peft",
"LLM",
"Portuguese",
"Bode",
"Alpaca",
"Llama 2",
"Q&A",
"text-generation",
"pt",
"en",
"arxiv:2401.02909",
"doi:10.57967/hf/1299",
"license:mit",
"model-index",
"region:us"
] | text-generation | 2023-10-12T04:44:00Z | ---
language:
- pt
- en
license: mit
library_name: peft
tags:
- LLM
- Portuguese
- Bode
- Alpaca
- Llama 2
- Q&A
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: text-generation
inference: false
model-index:
- name: bode-13b-alpaca-pt-br
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 33.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 38.25
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 36.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 71.22
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 46.75
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 51.68
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 82.21
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 65.54
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia-temp/tweetsentbr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 47.55
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/bode-13b-alpaca-pt-br
      name: Open Portuguese LLM Leaderboard
---
# BODE
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br/resolve/main/Logo_Bode_LLM_Circle.png" alt="Bode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
Bode is a large language model (LLM) for Portuguese, developed from the Llama 2 model through fine-tuning on the Alpaca dataset, translated into Portuguese by the authors of [Cabrita](https://huggingface.co/22h/cabrita-lora-v0-1). This model is designed for natural language processing tasks in Portuguese, such as text generation, machine translation, text summarization, and more.
The goal of developing BODE is to address the scarcity of LLMs for the Portuguese language. Classic models, such as LLaMa itself, can answer prompts in Portuguese, but they are prone to frequent grammatical errors and sometimes generate responses in English. There are still few Portuguese models available for free use and, to our knowledge, no models with 13b parameters or more trained specifically on Portuguese data.
See the [paper](https://arxiv.org/abs/2401.02909) for more information about Bode.
The version of the Bode model made available on this page was trained with the internal resources available at Recogna's advanced research laboratory.
## Model Details
- **Base Model:** Llama 2
- **Training Dataset:** Alpaca
- **Language:** Portuguese
## Available versions
| Number of parameters | PEFT | Model |
| :-: | :-: | :-: |
| 7b | ✓ | [recogna-nlp/bode-7b-alpaca-pt-br](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br) |
| 13b | ✓ | [recogna-nlp/bode-13b-alpaca-pt-br](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br)|
| 7b | | [recogna-nlp/bode-7b-alpaca-pt-br-no-peft](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-no-peft) |
| 13b | | [recogna-nlp/bode-13b-alpaca-pt-br-no-peft](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br-no-peft) |
| 7b-gguf | | [recogna-nlp/bode-7b-alpaca-pt-br-gguf](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-gguf) |
| 13b-gguf | | [recogna-nlp/bode-13b-alpaca-pt-br-gguf](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br-gguf) |
## Usage
We strongly recommend using Kaggle with a GPU. You can use Bode easily with HuggingFace's Transformers library, though you need access authorization for LLaMa 2. We also provide a Jupyter notebook on Google Colab; [click here](https://colab.research.google.com/drive/1uqVCED2wNPXIa7On0OAnghJNr13PUB5o?usp=sharing) to access it.
Below is a simple example of how to load the model and generate text:
```python
# Required installs
!pip install transformers
!pip install einops accelerate bitsandbytes
!pip install sentence_transformers
!pip install git+https://github.com/huggingface/peft.git

from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from peft import PeftModel, PeftConfig

llm_model = 'recogna-nlp/bode-13b-alpaca-pt-br'
hf_auth = 'HF_ACCESS_KEY'
config = PeftConfig.from_pretrained(llm_model)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, trust_remote_code=True, return_dict=True, load_in_8bit=True, device_map='auto', token=hf_auth)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, token=hf_auth)
# If you get the error 'ValueError: We need an `offload_dir` ...', add the parameter offload_folder="./offload_dir".
model = PeftModel.from_pretrained(model, llm_model)
model.eval()

# Testing text generation
def generate_prompt(instruction, input=None):
    if input:
        return f"""Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.

### Instrução:
{instruction}

### Entrada:
{input}

### Resposta:"""
    else:
        return f"""Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que complete adequadamente o pedido.

### Instrução:
{instruction}

### Resposta:"""

generation_config = GenerationConfig(
    temperature=0.2,
    top_p=0.75,
    num_beams=2,
    do_sample=True
)

def evaluate(instruction, input=None):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_length=300
    )
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        print("Resposta:", output.split("### Resposta:")[1].strip())

evaluate("Responda com detalhes: O que é um bode?")
# Example response (may vary due to the sampling temperature): Um bode é um animal do gênero Bubalus, da família Bovidae, que é um membro da ordem Artiodactyla. Os bodes são mamíferos herbívoros que são nativos da Ásia, África e Europa. Eles são conhecidos por seus cornos, que podem ser usados para defesa e como uma ferramenta.
```
## Training and Data
The Bode model was fine-tuned from Llama 2 on the Portuguese-language Alpaca dataset, an instruction-based dataset. Training was originally carried out on the Santos Dumont supercomputer at LNCC through Fundunesp project 2019/00697-8, but the version made available here is a replica, trained with the same data and parameters in Recogna's internal environment.
## Citation
If you wish to use Bode in your research, please cite this [paper](https://arxiv.org/abs/2401.02909), which discusses the model in more detail. Cite it as follows:
```
@misc{bode2024,
title={Introducing Bode: A Fine-Tuned Large Language Model for Portuguese Prompt-Based Task},
author={Gabriel Lino Garcia and Pedro Henrique Paiola and Luis Henrique Morelli and Giovani Candido and Arnaldo Cândido Júnior and Danilo Samuel Jodas and Luis C. S. Afonso and Ivan Rizzo Guilherme and Bruno Elias Penteado and João Paulo Papa},
year={2024},
eprint={2401.02909},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Contributions to improving this model are welcome. Feel free to open issues and pull requests.
## Acknowledgements
We thank the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing the HPC resources of the SDumont supercomputer.
# [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/bode-13b-alpaca-pt-br)
| Metric | Value |
|--------------------------|---------|
|Average |**52.54**|
|ENEM Challenge (No Images)| 33.66|
|BLUEX (No Images) | 38.25|
|OAB Exams | 36.04|
|Assin2 RTE | 71.22|
|Assin2 STS | 46.75|
|FaQuAD NLI | 51.68|
|HateBR Binary | 82.21|
|PT Hate Speech Binary | 65.54|
|tweetSentBR | 47.55|
|
recogna-nlp/bode-7b-alpaca-pt-br-gguf | recogna-nlp | 2024-10-25T13:07:42Z | 356 | 24 | transformers | [
"transformers",
"gguf",
"llama",
"LLM",
"Portuguese",
"Bode",
"Alpaca",
"Llama 2",
"text-generation",
"pt",
"en",
"arxiv:2401.02909",
"license:mit",
"region:us",
"conversational"
] | text-generation | 2024-01-26T00:18:26Z | ---
license: mit
language:
- pt
- en
metrics:
- accuracy
- f1
- precision
- recall
pipeline_tag: text-generation
tags:
- LLM
- Portuguese
- Bode
- Alpaca
- Llama 2
inference: false
---
# BODE - GGUF VERSION
<!--- PROJECT LOGO -->
<p align="center">
<img src="https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-gguf/resolve/main/Logo_Bode_LLM_GGUF.jpeg" alt="Bode Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This repository contains the 7B-parameter Bode model in GGUF format, in 32- and 16-bit versions as well as 8-, 5-, and 4-bit quantized versions.
Bode is a large language model (LLM) for Portuguese, developed from the Llama 2 model through fine-tuning on the Alpaca dataset, translated into Portuguese by the authors of Cabrita. This model is designed for natural language processing tasks in Portuguese, such as text generation, machine translation, text summarization, and more.
The goal of developing BODE is to address the scarcity of LLMs for the Portuguese language. Classic models, such as LLaMa itself, can answer prompts in Portuguese, but they are prone to frequent grammatical errors and sometimes generate responses in English. There are still few Portuguese models available for free use and, to our knowledge, no models with 13b parameters or more trained specifically on Portuguese data.
See the [paper](https://arxiv.org/abs/2401.02909) for more information about Bode.
The version of the Bode model made available on this page was trained with the internal resources available at Recogna's advanced research laboratory.
# About the GGUF format
The GGUF format allows the model to be used for inference with llama.cpp, on both CPU and GPU, as well as with other compatible libraries and tools, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LM Studio](https://lmstudio.ai/)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [ctransformers](https://github.com/marella/ctransformers)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
## Model Details
- **Base Model:** Llama 2
- **Training Dataset:** Alpaca
- **Language:** Portuguese
## Available versions
| Number of parameters | PEFT | Model |
| :-: | :-: | :-: |
| 7b | ✓ | [recogna-nlp/bode-7b-alpaca-pt-br](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br) |
| 13b | ✓ | [recogna-nlp/bode-13b-alpaca-pt-br](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br)|
| 7b | | [recogna-nlp/bode-7b-alpaca-pt-br-no-peft](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-no-peft) |
| 13b | | [recogna-nlp/bode-13b-alpaca-pt-br-no-peft](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br-no-peft) |
| 7b-gguf | | [recogna-nlp/bode-7b-alpaca-pt-br-gguf](https://huggingface.co/recogna-nlp/bode-7b-alpaca-pt-br-gguf) |
| 13b-gguf | | [recogna-nlp/bode-13b-alpaca-pt-br-gguf](https://huggingface.co/recogna-nlp/bode-13b-alpaca-pt-br-gguf) |
## Usage
Below is an example of using the 8-bit quantized version (`bode-7b-alpaca-q8_0.gguf`) with ctransformers and LangChain:
```python
# Required installs
!pip install ctransformers
!pip install langchain

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import CTransformers

template = """Abaixo está uma instrução que descreve uma tarefa. Escreva uma resposta que complete adequadamente o pedido.

### Instrução:
{instruction}

### Resposta:"""

prompt = PromptTemplate(template=template, input_variables=["instruction"])
llm = CTransformers(model="recogna-nlp/bode-7b-alpaca-pt-br-gguf", model_file="bode-7b-alpaca-q8_0.gguf", model_type='llama')
llm_chain = LLMChain(prompt=prompt, llm=llm)

response = llm_chain.run("O que é um bode?")
print(response)
# Example response (may vary due to the sampling temperature): Um bode é um animal de quatro patas e membros postiados atrás, com um corpo alongado e coberto por pelagem escura.
```
## Training and Data
The Bode model was fine-tuned from Llama 2 on the Portuguese-language Alpaca dataset, an instruction-based dataset. Training was carried out on the Santos Dumont supercomputer at LNCC through Fundunesp project 2019/00697-8.
## Citation
If you wish to use Bode in your research, please cite this [paper](https://arxiv.org/abs/2401.02909), which discusses the model in more detail. Cite it as follows:
```
@misc{bode2024,
title={Introducing Bode: A Fine-Tuned Large Language Model for Portuguese Prompt-Based Task},
author={Gabriel Lino Garcia and Pedro Henrique Paiola and Luis Henrique Morelli and Giovani Candido and Arnaldo Cândido Júnior and Danilo Samuel Jodas and Luis C. S. Afonso and Ivan Rizzo Guilherme and Bruno Elias Penteado and João Paulo Papa},
year={2024},
eprint={2401.02909},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contributions
Contributions to improving this model are welcome. Feel free to open issues and pull requests.
|
SvdH/RPLament-22B-exl2-6bpw | SvdH | 2024-10-25T12:56:14Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:SvdH/RPLament-22B",
"base_model:quantized:SvdH/RPLament-22B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-25T10:02:26Z | ---
base_model: SvdH/RPLament-22B
library_name: transformers
tags:
- mergekit
- merge
quantized_by: SvdH
base_model_relation: quantized
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# RPLament-22B-exl2-6bpw
6BPW ExLLamaV2 quant of https://huggingface.co/SvdH/RPLament-22B
Calibration dataset (parquet) used for quantization: https://huggingface.co/datasets/roleplay4fun/pippa
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) as a base.
### Models Merged
The following models were included in the merge:
* [allura-org/MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B)
* [Gryphe/Pantheon-RP-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)
* [rAIfle/Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
  int8_mask: true
dtype: bfloat16
models:
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.30
      density: 0.78
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 0.25
      density: 0.66
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.20
      density: 0.54
  - model: rAIfle/Acolyte-22B
    parameters:
      weight: 0.15
      density: 0.42
  - model: Gryphe/Pantheon-RP-1.6.2-22b-Small
    parameters:
      weight: 0.10
      density: 0.42
```
|
akash-107/BASE_PEFT_MODEL | akash-107 | 2024-10-25T12:55:29Z | 182 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T12:52:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
homeb82784/Qwen2-7B-Instruct-it-v1.0 | homeb82784 | 2024-10-25T12:44:24Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T12:32:56Z | ---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** homeb82784
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
mradermacher/Llama-3.2-3B-Prodigy-GGUF | mradermacher | 2024-10-25T12:40:10Z | 71 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Prodigy",
"base_model:quantized:bunnycore/Llama-3.2-3B-Prodigy",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-25T12:06:43Z | ---
base_model: bunnycore/Llama-3.2-3B-Prodigy
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Prodigy
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q3_K_L.gguf) | Q3_K_L | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q4_K_S.gguf) | Q4_K_S | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q4_K_M.gguf) | Q4_K_M | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q5_K_S.gguf) | Q5_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q5_K_M.gguf) | Q5_K_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q6_K.gguf) | Q6_K | 3.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.Q8_0.gguf) | Q8_0 | 3.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Prodigy-GGUF/resolve/main/Llama-3.2-3B-Prodigy.f16.gguf) | f16 | 7.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF | mradermacher | 2024-10-25T12:39:52Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:AndreyRzhaksinskiy/CDS-starcoder2-Ins-7b-E2E-20241024",
"base_model:quantized:AndreyRzhaksinskiy/CDS-starcoder2-Ins-7b-E2E-20241024",
"endpoints_compatible",
"region:us"
] | null | 2024-10-25T12:24:20Z | ---
base_model: AndreyRzhaksinskiy/CDS-starcoder2-Ins-7b-E2E-20241024
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AndreyRzhaksinskiy/CDS-starcoder2-Ins-7b-E2E-20241024
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.Q8_0.gguf) | Q8_0 | 7.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/CDS-starcoder2-Ins-7b-E2E-20241024-GGUF/resolve/main/CDS-starcoder2-Ins-7b-E2E-20241024.f16.gguf) | f16 | 14.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Llama-3.2-3B-Sci-Think-GGUF | mradermacher | 2024-10-25T12:36:36Z | 59 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Llama-3.2-3B-Sci-Think",
"base_model:quantized:bunnycore/Llama-3.2-3B-Sci-Think",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-25T12:10:19Z | ---
base_model: bunnycore/Llama-3.2-3B-Sci-Think
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Llama-3.2-3B-Sci-Think
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.2-3B-Sci-Think-GGUF/resolve/main/Llama-3.2-3B-Sci-Think.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
deepnet/SN9-C2-llama-HK4-5 | deepnet | 2024-10-25T12:34:17Z | 136 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T12:31:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jth01/aya-expanse-8b-5.0bpw-exl2 | jth01 | 2024-10-25T12:33:11Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"arxiv:2408.14960",
"arxiv:2407.02552",
"arxiv:2406.18682",
"arxiv:2410.10801",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-24T22:14:35Z | ---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere's [Privacy Policy](https://cohere.com/privacy). You'll receive email updates about C4AI and Cohere research, events, products and services. You can unsubscribe at any time."
extra_gated_fields:
Name: text
Affiliation: text
Country: country
I agree to use this model for non-commercial use ONLY: checkbox
---
# Model Card for Aya Expanse 8B
<img src="aya-expanse-8B.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Aya Expanse is an open-weight research release of a model with highly advanced multilingual capabilities. It focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the result of a year's dedicated research from [Cohere For AI](https://cohere.for.ai/), including [data arbitrage](https://arxiv.org/pdf/2408.14960), [multilingual preference training](https://arxiv.org/abs/2407.02552), [safety tuning](https://arxiv.org/abs/2406.18682), and [model merging](https://arxiv.org/abs/2410.10801). The result is a powerful multilingual large language model serving 23 languages.
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
This model card corresponds to the 8-billion version of the Aya Expanse model. We also released a 32-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-expanse-32B).
- Developed by: [Cohere For AI](https://cohere.for.ai/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: Aya Expanse 8B
- Model Size: 8 billion parameters
**Try Aya Expanse**
Before downloading the weights, you can try out Aya Expanse in our hosted [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/aya_expanse).
### Usage
Please install transformers from the source repository.
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format the message with the chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiΔimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiΔimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
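Note that this repository hosts a 5.0 bpw ExLlamaV2 quantization, while the snippet above loads the original full-precision weights. Loading the quant itself requires exllamav2-compatible tooling; the following is a minimal sketch assuming the exllamav2 dynamic generator API (names and signatures vary across versions, and the local path is hypothetical):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Path to a local download of this 5.0 bpw exl2 quant (hypothetical path)
model_dir = "./aya-expanse-8b-5.0bpw-exl2"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
# For chat use, wrap the prompt in the Cohere chat template shown above
print(generator.generate(prompt="Write a short letter to my mother.", max_new_tokens=100))
```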
### Example Notebooks
**Fine-Tuning**:
- [This notebook](https://colab.research.google.com/drive/1ryPYXzqb7oIn2fchMLdCNSIH5KfyEtv4) showcases a detailed use of fine-tuning Aya Expanse on more languages.
**Example Use cases**:
The following notebooks contributed by *Cohere For AI Community* members show how Aya Expanse can be used for different use cases:
- [Multilingual Writing Assistant](https://colab.research.google.com/drive/1SRLWQ0HdYN_NbRMVVUHTDXb-LSMZWF60#scrollTo=qBK1H7WO9UHG)
- [AyaMCooking](https://colab.research.google.com/drive/1-cnn4LXYoZ4ARBpnsjQM3sU7egOL_fLB?usp=sharing#scrollTo=ukHwdlrgXSdI)
- [Multilingual Question-Answering System](https://colab.research.google.com/drive/1bbB8hzyzCJbfMVjsZPeh4yNEALJFGNQy?usp=sharing)
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya Expanse 8B is an auto-regressive language model that uses an optimized transformer architecture. Post-training includes supervised finetuning, preference training, and model merging.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8K
### Evaluation
<img src="winrates_marenahard.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates_by_lang.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates_step_by_step.png" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya Expanse in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya_expanse).
|
mpi-inno-comp/paecter | mpi-inno-comp | 2024-10-25T12:30:17Z | 261 | 8 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"patent-similarity",
"sentence-similarity",
"transformers",
"patent",
"en",
"dataset:mpi-inno-comp/paecter_dataset",
"arxiv:2402.19411",
"doi:10.57967/hf/2003",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-29T09:34:49Z | ---
language: en
pipeline_tag: sentence-similarity
tags:
- patent-similarity
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- patent
datasets:
- mpi-inno-comp/paecter_dataset
license: apache-2.0
---
# PaECTER - a Patent Similarity Model
PaECTER (Patent Embeddings using Citation-informed TransformERs) is a patent similarity model.
Built upon Google's BERT for Patents as its base model, it generates 1024-dimensional dense vector embeddings from patent text.
These vectors encapsulate the semantic essence of the given patent text, making it highly suitable for various downstream tasks related to patent analysis.
Paper: https://arxiv.org/pdf/2402.19411
## Applications
* Semantic Search
* Prior Art Search
* Clustering
* Patent Landscaping
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mpi-inno-comp/paecter')
embeddings = model.encode(sentences)
print(embeddings)
```
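Since PaECTER is a similarity model, the usual next step is scoring pairs of patent texts by cosine similarity of their embeddings. A minimal sketch (the example abstracts are placeholders):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mpi-inno-comp/paecter')

# Two hypothetical patent abstracts to compare
patent_a = "A battery electrode comprising a silicon-carbon composite material."
patent_b = "An anode formed from silicon particles embedded in a carbon matrix."

embeddings = model.encode([patent_a, patent_b], convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1])  # cosine similarity in [-1, 1]
print(float(score))
```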
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mpi-inno-comp/paecter')
model = AutoModel.from_pretrained('mpi-inno-comp/paecter')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt', max_length=512)
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
Evaluation of this model is available in our paper, [PaECTER: Patent-level Representation Learning using Citation-informed Transformers](https://arxiv.org/abs/2402.19411).
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 318750 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CustomTripletLoss.CustomTripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 1}
```
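For reference, a triplet objective with Euclidean distance and margin 1 has the standard form below (the `CustomTripletLoss` used here may add modifications on top of it):

$$\mathcal{L}(a, p, n) = \max\big(\lVert f(a) - f(p) \rVert_2 - \lVert f(a) - f(n) \rVert_2 + 1,\; 0\big)$$

where $f$ is the embedding model, $a$ an anchor patent, $p$ a positive (citation-linked) patent, and $n$ a negative patent.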
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 4000,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 31875.0,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```
@misc{ghosh2024paecter,
title={PaECTER: Patent-level Representation Learning using Citation-informed Transformers},
author={Mainak Ghosh and Sebastian Erhardt and Michael E. Rose and Erik Buunk and Dietmar Harhoff},
year={2024},
eprint={2402.19411},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
``` |
kaarthu2003/wav2vec2-large-xls-r-300m-telugu-final | kaarthu2003 | 2024-10-25T12:29:16Z | 16 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-24T08:30:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Scientific-Paper-Summarization-GGUF | mradermacher | 2024-10-25T12:26:02Z | 53 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:GilbertKrantz/Scientific-Paper-Summarization",
"base_model:quantized:GilbertKrantz/Scientific-Paper-Summarization",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-25T12:25:21Z | ---
base_model: GilbertKrantz/Scientific-Paper-Summarization
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/GilbertKrantz/Scientific-Paper-Summarization
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
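As a concrete sketch, one of the quants below can be downloaded and run with llama.cpp roughly as follows (binary names and flags differ between llama.cpp versions, so treat this as an illustration rather than exact usage):
```
# Fetch the 'fast, recommended' Q4_K_M quant from this repo
huggingface-cli download mradermacher/Scientific-Paper-Summarization-GGUF \
  Scientific-Paper-Summarization.Q4_K_M.gguf --local-dir .

# Newer llama.cpp builds ship `llama-cli`; older ones call the binary `main`
./llama-cli -m Scientific-Paper-Summarization.Q4_K_M.gguf -p "Summarize the following paper: ..."
```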
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Scientific-Paper-Summarization-GGUF/resolve/main/Scientific-Paper-Summarization.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
GrandS/grands-lora | GrandS | 2024-10-25T12:24:17Z | 13 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T11:42:56Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: GrandS
---
# Grands Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `GrandS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('GrandS/grands-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Saving-Willy/cetacean-classifier | Saving-Willy | 2024-10-25T12:22:00Z | 211 | 2 | transformers | [
"transformers",
"safetensors",
"cetaceanet",
"image-classification",
"biology",
"biodiversity",
"custom_code",
"dataset:Saving-Willy/Happywhale-kaggle",
"dataset:Saving-Willy/test-sync",
"license:apache-2.0",
"co2_eq_emissions",
"autotrain_compatible",
"region:us"
] | image-classification | 2024-10-24T17:49:57Z | ---
library_name: transformers
tags:
- biology
- biodiversity
co2_eq_emissions:
emissions: 240
source: https://calculator.green-algorithms.org/
training_type: pre-training
geographical_location: Switzerland
hardware_used: 1 v100 GPU
license: apache-2.0
datasets:
- Saving-Willy/Happywhale-kaggle
- Saving-Willy/test-sync
metrics:
- accuracy
pipeline_tag: image-classification
---
# Model Card for CetaceaNet
We provide a model for classifying whale species from images of their tails and fins.
## Model Details
The model takes as input a natural image of a cetacean and returns the three most probable cetacean species identified in this image.
### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** HappyWhale
- **Shared by [optional]:** The Saving-Willy organization
- **Model type:** EfficientNet
### Model Sources
- **Repository:** https://github.com/knshnb/kaggle-happywhale-1st-place
- **Paper:** https://besjournals.onlinelibrary.wiley.com/doi/10.1111/2041-210X.14167
## Uses
This model is intended for research use cases. It is intended to be fine-tuned on new data gathered by research institutions around the World.
### Downstream Use
We think that an interesting downstream use case would be identifying whale IDs based on our model (and future extensions of it).
### Out-of-Scope Use
This model is not intended to facilitate marine tourism or the exploitation of cetaceans and other marine wildlife.
## How to Get Started with the Model
Install the necessary libraries to run our model (`transformers` and the extra packages listed in `requirements.txt`):
```
pip install -r requirements.txt
```
Use the code below to get started with the model.
```
import cv2
from transformers import AutoModelForImageClassification

# trust_remote_code=True is needed because the classifier ships custom model code
cetacean_classifier = AutoModelForImageClassification.from_pretrained("Saving-Willy/cetacean-classifier", trust_remote_code=True)

# Classify an image of a whale tail or fin
img = cv2.imread("tail.jpg")
predictions = cetacean_classifier(img)
print(predictions)  # the three most probable species for this image
```
## Training and Evaluation Details
To learn more about how the model was trained and evaluated, see [1st Place Solution of Kaggle Happywhale Competition](https://github.com/knshnb/kaggle-happywhale-1st-place).
## Citation
If you use this model in your research, please cite:
the original model authors:
```
@article{patton2023deep,
title={A deep learning approach to photo--identification demonstrates high performance on two dozen cetacean species},
author={Patton, Philip T and Cheeseman, Ted and Abe, Kenshin and Yamaguchi, Taiki and Reade, Walter and Southerland, Ken and Howard, Addison and Oleson, Erin M and Allen, Jason B and Ashe, Erin and others},
journal={Methods in ecology and evolution},
volume={14},
number={10},
pages={2611--2625},
year={2023},
publisher={Wiley Online Library}
}
```
the HappyWhale project:
```
@misc{happy-whale-and-dolphin,
author = {Ted Cheeseman and Ken Southerland and Walter Reade and Addison Howard},
title = {Happywhale - Whale and Dolphin Identification},
year = {2022},
howpublished = {\url{https://kaggle.com/competitions/happy-whale-and-dolphin}},
note = {Kaggle}
}
``` |
astroa7m/Silma-AOU-Full | astroa7m | 2024-10-25T12:12:04Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-25T12:07:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SvdH/RPLament-22B | SvdH | 2024-10-25T12:06:37Z | 13 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1",
"base_model:merge:ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1",
"base_model:Gryphe/Pantheon-RP-1.6.2-22b-Small",
"base_model:merge:Gryphe/Pantheon-RP-1.6.2-22b-Small",
"base_model:allura-org/MS-Meadowlark-22B",
"base_model:merge:allura-org/MS-Meadowlark-22B",
"base_model:anthracite-org/magnum-v4-22b",
"base_model:merge:anthracite-org/magnum-v4-22b",
"base_model:rAIfle/Acolyte-22B",
"base_model:merge:rAIfle/Acolyte-22B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T08:02:37Z | ---
base_model:
- allura-org/MS-Meadowlark-22B
- Gryphe/Pantheon-RP-1.6.2-22b-Small
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- rAIfle/Acolyte-22B
- anthracite-org/magnum-v4-22b
library_name: transformers
tags:
- mergekit
- merge
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# RPLament-22B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) as the base model.
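In practice, a merge like this is reproduced by saving the configuration shown below as a YAML file and running the mergekit CLI over it; a sketch, assuming a recent mergekit release (flags vary by version):
```
pip install mergekit
mergekit-yaml config.yaml ./RPLament-22B --cuda
```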
### Models Merged
The following models were included in the merge:
* [allura-org/MS-Meadowlark-22B](https://huggingface.co/allura-org/MS-Meadowlark-22B)
* [Gryphe/Pantheon-RP-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)
* [rAIfle/Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B)
* [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
int8_mask: true
dtype: bfloat16
models:
- model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
weight: 0.30
density: 0.78
- model: anthracite-org/magnum-v4-22b
parameters:
weight: 0.25
density: 0.66
- model: allura-org/MS-Meadowlark-22B
parameters:
weight: 0.20
density: 0.54
- model: rAIfle/Acolyte-22B
parameters:
weight: 0.15
density: 0.42
- model: Gryphe/Pantheon-RP-1.6.2-22b-Small
parameters:
weight: 0.10
density: 0.42
``` |
KirsanovArtem/llama-small-1b | KirsanovArtem | 2024-10-25T11:47:24Z | 119 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T11:46:39Z | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** KirsanovArtem
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kyujinpy/KoR-Orca-Platypus-13B | kyujinpy | 2024-10-25T11:47:01Z | 2,266 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/OpenOrca-KO",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-10-13T20:45:59Z | ---
language:
- ko
datasets:
- kyujinpy/OpenOrca-KO
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. and Marker Inc.**
**The license is `cc-by-nc-sa-4.0`.**
# **🐳KoR-Orca-Platypus-13B🐳**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
KoR-Orca-Platypus-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Repo Link**
GitHub Korean-OpenOrca: [🐳KoR-Orca-Platypus-13B🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
Version of combined dataset: [kyujinpy/KOR-OpenOrca-Platypus](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus)
I combined [OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) and [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus).
I used an A100 40GB GPU on Colab for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Results as reported on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KoR-Orca-Platypus-13B🐳 (ours) | 50.13 | 42.06 | 53.95 | 42.28 | 43.55 | 68.78 |
| [GenAI-llama2-ko-en-platypus](https://huggingface.co/42MARU/GenAI-llama2-ko-en-platypus) | 49.81 | 45.22 | 55.25 | 41.84 | 44.78 | 61.97 |
| [KoT-Platypus2-13B](https://huggingface.co/kyujinpy/KoT-platypus2-13B) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 |
| [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [Korean-OpenOrca-13B🐳](https://huggingface.co/kyujinpy/Korean-OpenOrca-13B) | 47.85 | 43.09 | 54.13 | 40.24 | 45.22 | 56.57 |
> Comparison with the top 4 SOTA models (updated 10/14).
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/KoR-Orca-Platypus-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
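The snippet above only loads the model; a minimal generation sketch follows (the plain-text prompt format is an assumption, as the card does not document a specific template):
```python
# Ask a simple Korean question ("What is the capital of Korea?")
prompt = "한국의 수도는 어디인가요?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=64)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```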
--- |
1MK026/BART_HYDROGEN_GENERATION_QA | 1MK026 | 2024-10-25T11:40:14Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:vblagoje/bart_lfqa",
"base_model:finetune:vblagoje/bart_lfqa",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-25T11:38:41Z | ---
library_name: transformers
license: mit
base_model: vblagoje/bart_lfqa
tags:
- generated_from_trainer
model-index:
- name: BART_HYDROGEN_GENERATION_QA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BART_HYDROGEN_GENERATION_QA
This model is a fine-tuned version of [vblagoje/bart_lfqa](https://huggingface.co/vblagoje/bart_lfqa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5046
## Model description
More information needed
## Intended uses & limitations
More information needed
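In lieu of documented usage, here is a hedged inference sketch; the `question: ... context: ...` input format is inherited from the base model `vblagoje/bart_lfqa` and is assumed to carry over to this fine-tune:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "1MK026/BART_HYDROGEN_GENERATION_QA"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# LFQA-style input: passages are concatenated after the question, separated by "<P>"
question = "How is green hydrogen produced?"
context = "<P> Green hydrogen is produced by electrolysis of water using renewable electricity."
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt", truncation=True)

answer_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```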
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2697 | 0.2092 | 100 | 0.5574 |
| 1.0628 | 0.4184 | 200 | 0.5151 |
| 1.0037 | 0.6276 | 300 | 0.5066 |
| 0.9652 | 0.8368 | 400 | 0.4842 |
| 0.9284 | 1.0460 | 500 | 0.4974 |
| 0.8272 | 1.2552 | 600 | 0.5046 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
BUT-FIT/gpt2_512h_8l | BUT-FIT | 2024-10-25T11:38:12Z | 5 | 0 | transformers | [
"transformers",
"gpt2-multi-head",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-10-22T13:11:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
masatochi/tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2 | masatochi | 2024-10-25T11:24:24Z | 43 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-10-25T10:14:19Z | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- alpaca-cleaned_train_data.json
ds_type: json
path: /workspace/input_data/alpaca-cleaned_train_data.json
type:
field_input: input
field_instruction: output
field_output: instruction
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 2
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: masatochi/tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.06
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 3
mlflow_experiment_name: /tmp/alpaca-cleaned_train_data.json
model_type: LlamaForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 5
save_strategy: steps
sequence_len: 4096
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
val_set_size: 0.05
wandb_entity: lkotbimehdi
wandb_mode: online
wandb_project: lko
wandb_run: miner_id_24
wandb_runid: 56d9075c-cf98-498b-8ad6-84bc66fb6ee2
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1458
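Since this checkpoint is a LoRA adapter (note `adapter: lora` in the config above), it must be applied on top of the base model at load time. A minimal sketch using the standard PEFT API:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "masatochi/tuning-56d9075c-cf98-498b-8ad6-84bc66fb6ee2")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B")
```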
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9376 | 0.0005 | 1 | 3.2302 |
| 1.2639 | 0.0166 | 34 | 1.3728 |
| 1.3889 | 0.0333 | 68 | 1.2355 |
| 1.084 | 0.0499 | 102 | 1.1849 |
| 1.3211 | 0.0665 | 136 | 1.1617 |
| 0.8995 | 0.0831 | 170 | 1.1458 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.45.2
- Pytorch 2.4.1+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
shresthagarwal/Llama-3.2-1B-Instruct-LineItem | shresthagarwal | 2024-10-25T11:23:08Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T11:16:40Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yzhany0r/detr_finetuned_cppe5 | yzhany0r | 2024-10-25T11:20:06Z | 30 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"conditional_detr",
"object-detection",
"generated_from_trainer",
"base_model:microsoft/conditional-detr-resnet-50",
"base_model:finetune:microsoft/conditional-detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-10-24T06:19:31Z | ---
library_name: transformers
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr_finetuned_cppe5
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1933
- Map: 0.0
- Map 50: 0.0
- Map 75: 0.0
- Map Small: 0.0
- Map Medium: -1.0
- Map Large: -1.0
- Mar 1: 0.0
- Mar 10: 0.0
- Mar 100: 0.0
- Mar Small: 0.0
- Mar Medium: -1.0
- Mar Large: -1.0
- Map Coverall: 0.0
- Mar 100 Coverall: 0.0
- Map Face Shield: 0.0
- Mar 100 Face Shield: 0.0
- Map Gloves: 0.0
- Mar 100 Gloves: 0.0
- Map Goggles: 0.0
- Mar 100 Goggles: 0.0
- Map Mask: 0.0
- Mar 100 Mask: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
|:-------------:|:-----:|:----:|:---------------:|:---:|:------:|:------:|:---------:|:----------:|:---------:|:-----:|:------:|:-------:|:---------:|:----------:|:---------:|:------------:|:----------------:|:---------------:|:-------------------:|:----------:|:--------------:|:-----------:|:---------------:|:--------:|:------------:|
| No log | 1.0 | 213 | 1.7858 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| No log | 2.0 | 426 | 1.7974 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.612 | 3.0 | 639 | 1.6086 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.612 | 4.0 | 852 | 1.5644 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3972 | 5.0 | 1065 | 1.4356 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3972 | 6.0 | 1278 | 1.4547 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3972 | 7.0 | 1491 | 1.4207 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2125 | 8.0 | 1704 | 1.3967 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.2125 | 9.0 | 1917 | 1.3162 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.09 | 10.0 | 2130 | 1.3086 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.09 | 11.0 | 2343 | 1.3013 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9743 | 12.0 | 2556 | 1.2823 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9743 | 13.0 | 2769 | 1.2798 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.9743 | 14.0 | 2982 | 1.2379 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8793 | 15.0 | 3195 | 1.2404 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8793 | 16.0 | 3408 | 1.2136 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7806 | 17.0 | 3621 | 1.2239 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7806 | 18.0 | 3834 | 1.2372 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7053 | 19.0 | 4047 | 1.2269 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7053 | 20.0 | 4260 | 1.2231 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7053 | 21.0 | 4473 | 1.2135 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6369 | 22.0 | 4686 | 1.2037 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6369 | 23.0 | 4899 | 1.2048 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5831 | 24.0 | 5112 | 1.1930 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5831 | 25.0 | 5325 | 1.2022 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5447 | 26.0 | 5538 | 1.1945 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5447 | 27.0 | 5751 | 1.1970 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5447 | 28.0 | 5964 | 1.1923 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5184 | 29.0 | 6177 | 1.1936 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5184 | 30.0 | 6390 | 1.1933 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | -1.0 | -1.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.2.2
- Datasets 3.0.2
- Tokenizers 0.20.1
|
huawei-noah/pangu-CodeCLM-partial-300m | huawei-noah | 2024-10-25T11:18:53Z | 6 | 0 | null | [
"pytorch",
"gpt2",
"python",
"code",
"dataset:huawei-noah/python_text2code",
"region:us"
] | null | 2024-09-05T12:56:09Z | ---
datasets:
- huawei-noah/python_text2code
tags:
- python
- code
---
# Model Card for pangu-CodeCLM-partial-300m
- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Paper:** https://aclanthology.org/2024.eacl-long.72.pdf
## Model Description
This model is a PanGu-Alpha model further trained on text-to-code pairs
collected from public GitHub repositories.
Training was performed with the CodeCLM objective, i.e. causal language modeling that computes the loss only over code tokens, combined with partial embedding separation (only Python-specific tokens are assigned a separate embedding).
In order to use the model, first download it from the hub and have a look at the [evaluation section](https://github.com/huawei-noah/noah-research/blob/master/NLP/text2code_mrpt/README.md#evaluation).
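For orientation, a hedged loading sketch (the repo is tagged as a GPT-2-style checkpoint, so the standard auto classes are assumed to work; the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huawei-noah/pangu-CodeCLM-partial-300m"
# trust_remote_code is a defensive choice in case the repo ships custom code.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "# Write a function that adds two numbers\n"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```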
## Citation [optional]
**BibTeX:**
```bibtex
@inproceedings{christopoulou-etal-2024-text,
title = "Text-to-Code Generation with Modality-relative Pre-training",
author = "Christopoulou, Fenia and
Zhang, Guchun and
Lampouras, Gerasimos",
editor = "Graham, Yvette and
Purver, Matthew",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = mar,
year = "2024",
address = "St. Julian{'}s, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-long.72",
pages = "1194--1208",
abstract = "Large pre-trained language models have recently been expanded and applied to programming language tasks with great success, often through further pre-training of a strictly-natural language model{--}where training sequences typically contain both natural and (linearised) programming language. Such approaches effectively map both modalities of the sequence into the same embedding space. However, programming language keywords (e.g. {``}while{''}) often have very strictly defined semantics. As such, transfer learning from their natural language usage may not necessarily be beneficial to their code application and vise versa. Assuming an already pre-trained language model, in this work we investigate how sequence tokens can be adapted and represented differently, depending on which modality they belong to, and to the ultimate benefit of the downstream task. We experiment with separating embedding spaces between modalities during further model pre-training with modality-relative training objectives. We focus on text-to-code generation and observe consistent improvements across two backbone models and two test sets, measuring pass@$k$ and a novel incremental variation.",
}
```
## Model Card Authors [optional]
[Fenia Christopoulou](mailto:[email protected])
|
DeepMount00/Llama-3.1-Distilled | DeepMount00 | 2024-10-25T11:14:37Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"it",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T10:53:02Z | ---
language:
- it
- en
license: llama3
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B
---
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MODEL_NAME = "DeepMount00/Llama-3.1-Distilled"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16).eval()
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
def generate_answer(prompt):
messages = [
{"role": "user", "content": prompt},
]
    model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)  # add_generation_prompt so the model answers instead of continuing the user turn
generated_ids = model.generate(model_inputs, max_new_tokens=200, do_sample=True,
temperature=0.001)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
return decoded[0]
prompt = "Come si apre un file json in python?"
answer = generate_answer(prompt)
print(answer)
```
---
## Developer
[Michele Montebovi]
|
huawei-noah/pangu-CodeCLM-300m | huawei-noah | 2024-10-25T11:12:46Z | 6 | 0 | null | [
"pytorch",
"gpt2",
"python",
"code",
"dataset:huawei-noah/python_text2code",
"region:us"
] | null | 2024-09-05T12:55:46Z | ---
datasets:
- huawei-noah/python_text2code
tags:
- python
- code
---
# Model Card for pangu-CodeCLM-300m
- **Repository:** https://github.com/huawei-noah/noah-research/tree/master/NLP/text2code_mrpt
- **Paper:** https://aclanthology.org/2024.eacl-long.72.pdf
## Model Description
This model is a PanGu-Alpha model further trained on text-to-code pairs
collected from public GitHub repositories.
Training was performed with the CodeCLM objective, i.e. causal language modeling that computes the loss only over code tokens.
In order to use the model, first download it from the hub and have a look at the [evaluation section](https://github.com/huawei-noah/noah-research/blob/master/NLP/text2code_mrpt/README.md#evaluation).
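The masking behind "loss only over code tokens" is not shown on this card; a minimal sketch using the standard Hugging Face ignore-index convention (the function name and mask are illustrative, not from this repo) could be:

```python
import torch

def codeclm_labels(input_ids: torch.Tensor, code_token_mask: torch.Tensor) -> torch.Tensor:
    # Positions labelled -100 are ignored by the cross-entropy loss, so only
    # code-token positions contribute to the causal LM objective.
    labels = input_ids.clone()
    labels[~code_token_mask] = -100
    return labels

# usage: outputs = model(input_ids=input_ids, labels=codeclm_labels(input_ids, mask))
```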
## Citation [optional]
**BibTeX:**
```bibtex
@inproceedings{christopoulou-etal-2024-text,
title = "Text-to-Code Generation with Modality-relative Pre-training",
author = "Christopoulou, Fenia and
Zhang, Guchun and
Lampouras, Gerasimos",
editor = "Graham, Yvette and
Purver, Matthew",
booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = mar,
year = "2024",
address = "St. Julian{'}s, Malta",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.eacl-long.72",
pages = "1194--1208",
abstract = "Large pre-trained language models have recently been expanded and applied to programming language tasks with great success, often through further pre-training of a strictly-natural language model{--}where training sequences typically contain both natural and (linearised) programming language. Such approaches effectively map both modalities of the sequence into the same embedding space. However, programming language keywords (e.g. {``}while{''}) often have very strictly defined semantics. As such, transfer learning from their natural language usage may not necessarily be beneficial to their code application and vise versa. Assuming an already pre-trained language model, in this work we investigate how sequence tokens can be adapted and represented differently, depending on which modality they belong to, and to the ultimate benefit of the downstream task. We experiment with separating embedding spaces between modalities during further model pre-training with modality-relative training objectives. We focus on text-to-code generation and observe consistent improvements across two backbone models and two test sets, measuring pass@$k$ and a novel incremental variation.",
}
```
## Model Card Authors [optional]
[Fenia Christopoulou](mailto:[email protected])
|
mradermacher/Monstral-123B-i1-GGUF | mradermacher | 2024-10-25T11:08:10Z | 220 | 3 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:MarsupialAI/Monstral-123B",
"base_model:quantized:MarsupialAI/Monstral-123B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-25T06:42:12Z | ---
base_model: MarsupialAI/Monstral-123B
language:
- en
library_name: transformers
license: other
license_name: mrl
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MarsupialAI/Monstral-123B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Monstral-123B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
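As a hedged convenience sketch, reassembling a split quant is plain byte concatenation; the filename below is one example from the table that follows:

```python
from pathlib import Path

# Concatenate "<name>.gguf.partXofY" pieces back into a single file, in order.
parts = sorted(Path(".").glob("Monstral-123B.i1-Q4_K_M.gguf.part*"))
with open("Monstral-123B.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())
```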
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ1_S.gguf) | i1-IQ1_S | 26.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ1_M.gguf) | i1-IQ1_M | 28.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 32.5 | |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ2_S.gguf) | i1-IQ2_S | 38.5 | |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ2_M.gguf) | i1-IQ2_M | 41.7 | |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q2_K.gguf) | i1-Q2_K | 45.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 47.1 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_XS.gguf.part2of2) | i1-IQ3_XS | 50.2 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q3_K_S.gguf.part2of2) | i1-Q3_K_S | 52.9 | IQ3_XS probably better |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_S.gguf.part2of2) | i1-IQ3_S | 53.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ3_M.gguf.part2of2) | i1-IQ3_M | 55.4 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 59.2 | IQ3_S probably better |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 64.7 | IQ3_M probably better |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 65.5 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 69.4 | fast, low quality |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 69.7 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF/resolve/main/Monstral-123B.i1-Q6_K.gguf.part3of3) | i1-Q6_K | 100.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Monstral-123B-GGUF | mradermacher | 2024-10-25T11:08:10Z | 26 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:MarsupialAI/Monstral-123B",
"base_model:quantized:MarsupialAI/Monstral-123B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-24T16:48:13Z | ---
base_model: MarsupialAI/Monstral-123B
language:
- en
library_name: transformers
license: other
license_name: mrl
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MarsupialAI/Monstral-123B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Monstral-123B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
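Once downloaded, a single-file quant can be loaded with, for example, llama-cpp-python; a minimal, hedged sketch (context size and file choice are arbitrary):

```python
from llama_cpp import Llama

# Load the Q2_K single-file quant and run a short completion.
llm = Llama(model_path="Monstral-123B.Q2_K.gguf", n_ctx=4096)
print(llm("Hello, ", max_tokens=32)["choices"][0]["text"])
```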
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q2_K.gguf) | Q2_K | 45.3 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q3_K_S.gguf.part2of2) | Q3_K_S | 52.9 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q3_K_M.gguf.part2of2) | Q3_K_M | 59.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q3_K_L.gguf.part2of2) | Q3_K_L | 64.7 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.IQ4_XS.gguf.part2of2) | IQ4_XS | 66.1 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q4_K_S.gguf.part2of2) | Q4_K_S | 69.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q4_K_M.gguf.part2of2) | Q4_K_M | 73.3 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q5_K_S.gguf.part2of2) | Q5_K_S | 84.5 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q5_K_M.gguf.part2of2) | Q5_K_M | 86.6 | |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q6_K.gguf.part3of3) | Q6_K | 100.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Monstral-123B-GGUF/resolve/main/Monstral-123B.Q8_0.gguf.part3of3) | Q8_0 | 130.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
qgallouedec/Qwen2-0.5B-OnlineDPO-PairRM | qgallouedec | 2024-10-25T10:48:52Z | 132 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"online-dpo",
"conversational",
"dataset:trl-lib/ultrafeedback-prompt",
"arxiv:2402.04792",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T09:45:28Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/ultrafeedback-prompt
library_name: transformers
model_name: Qwen2-0.5B-OnlineDPO-PairRM
tags:
- generated_from_trainer
- trl
- online-dpo
licence: license
---
# Model Card for Qwen2-0.5B-OnlineDPO-PairRM
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen2-0.5B-OnlineDPO-PairRM", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/ffd4u5wa)
This model was trained with Online DPO, a method introduced in [Direct Language Model Alignment from Online AI Feedback](https://huggingface.co/papers/2402.04792).
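For reference, a minimal Online DPO training sketch with TRL (the PairRM judge wiring and config values are assumptions based on the TRL documentation, not this run's exact script):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
judge = PairRMJudge()  # ranks pairs of on-policy completions
dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

trainer = OnlineDPOTrainer(
    model=model,
    judge=judge,
    args=OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO-PairRM"),
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```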
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0.dev0
- Pytorch: 2.4.1
- Datasets: 3.0.2
- Tokenizers: 0.20.0
## Citations
Cite Online DPO as:
```bibtex
@article{guo2024direct,
title = {{Direct Language Model Alignment from Online AI Feedback}},
author = {Shangmin Guo and Biao Zhang and Tianlin Liu and Tianqi Liu and Misha Khalman and Felipe Llinares and Alexandre Ram{'{e}} and Thomas Mesnard and Yao Zhao and Bilal Piot and Johan Ferret and Mathieu Blondel},
year = 2024,
eprint = {arXiv:2402.04792}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ZeroWw/EuroLLM-1.7B-Instruct-SILLY | ZeroWw | 2024-10-25T10:46:54Z | 8 | 0 | null | [
"gguf",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-25T10:45:32Z |
---
license: mit
language:
- en
pipeline_tag: text-generation
---
ZeroWw 'SILLY' version.
The original model has been quantized (fq8 version)
and a percentage of its tensors have
been modified by adding some noise.
Full colab: https://colab.research.google.com/drive/1a7seagBzu5l3k3FL4SFk0YJocl7nsDJw?usp=sharing
Fast colab: https://colab.research.google.com/drive/1SDD7ox21di_82Y9v68AUoy0PhkxwBVvN?usp=sharing
Original reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1ec0s8p/i_made_a_silly_test/
I created a program to randomize the weights of a model. The program has 2 parameters: the percentage of weights to modify and the maximum percentage of the original value to randomly apply to each weight (a minimal sketch of the idea is shown below).
At the end I check the resulting GGUF file for binary differences.
In this example I set it to modify 100% of the weights of Mistral 7b Instruct v0.3 with a maximum deviation of 15%.
Since the deviation is calculated on the F32 weights, the effective change differs once quantized to Q8\_0.
So, in the end I got a file that, compared to the original, has:
Bytes difference percentage: 73.04%
Average value divergence: 2.98%
The cool thing is that, chatting with the model, I see no apparent difference, and the model still works as nicely as the original.
Since I am running everything on CPU, I could not run perplexity scores or anything compute-intensive.
As a small test, I asked the model a few questions (like the history of the Roman Empire) and then fact-checked its answers using a big model. No errors were detected.
Update: the whole procedure was tested and created on Colab.
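The randomization program itself is not included here; a minimal PyTorch sketch of the idea (the function name and the uniform noise distribution are assumptions) might look like:

```python
import torch

def add_noise(state_dict, fraction=1.0, max_dev=0.15, seed=0):
    # For each selected float weight w, add uniform noise in
    # [-max_dev * |w|, +max_dev * |w|]; `fraction` picks which weights change.
    g = torch.Generator().manual_seed(seed)
    for name, w in state_dict.items():
        if not torch.is_floating_point(w):
            continue
        mask = torch.rand(w.shape, generator=g) < fraction
        noise = (torch.rand(w.shape, generator=g) * 2 - 1) * max_dev * w.abs()
        state_dict[name] = torch.where(mask, w + noise, w)
    return state_dict
```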
Created on: Fri Oct 25, 10:45:32
|
anumafzal94/cs-test-model | anumafzal94 | 2024-10-25T10:45:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T10:38:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qgallouedec/Qwen2-0.5B-OnlineDPO-GRM-Gemma | qgallouedec | 2024-10-25T10:45:05Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"online-dpo",
"conversational",
"dataset:trl-lib/ultrafeedback-prompt",
"arxiv:2402.04792",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T09:20:10Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/ultrafeedback-prompt
library_name: transformers
model_name: Qwen2-0.5B-OnlineDPO-GRM-Gemma
tags:
- generated_from_trainer
- trl
- online-dpo
licence: license
---
# Model Card for Qwen2-0.5B-OnlineDPO-GRM-Gemma
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="qgallouedec/Qwen2-0.5B-OnlineDPO-GRM-Gemma", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/520cnnjl)
This model was trained with Online DPO, a method introduced in [Direct Language Model Alignment from Online AI Feedback](https://huggingface.co/papers/2402.04792).
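For reference, a minimal Online DPO sketch using a reward model instead of a pairwise judge (the `<grm-gemma-reward-model>` id is a placeholder, and the wiring follows the TRL documentation rather than this run's exact script):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)
from trl import OnlineDPOConfig, OnlineDPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
# A scalar-output reward model scores each on-policy completion.
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "<grm-gemma-reward-model>", num_labels=1)
dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

trainer = OnlineDPOTrainer(
    model=model,
    reward_model=reward_model,
    args=OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO-GRM-Gemma"),
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```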
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0.dev0
- Pytorch: 2.4.1
- Datasets: 3.0.2
- Tokenizers: 0.20.0
## Citations
Cite Online DPO as:
```bibtex
@article{guo2024direct,
title = {{Direct Language Model Alignment from Online AI Feedback}},
author = {Shangmin Guo and Biao Zhang and Tianlin Liu and Tianqi Liu and Misha Khalman and Felipe Llinares and Alexandre Ram{'{e}} and Thomas Mesnard and Yao Zhao and Bilal Piot and Johan Ferret and Mathieu Blondel},
year = 2024,
eprint = {arXiv:2402.04792}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin GallouΓ©dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sjkwon/3999_sft-mdo-diverse-train-nllb-200-600M | sjkwon | 2024-10-25T10:42:06Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | 2024-10-25T10:40:19Z | ---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. Since the base checkpoint is NLLB-200 (a seq2seq model), it is used for text-to-text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
# The base model is NLLB-200 (a seq2seq translation model), so the
# "translation" pipeline with NLLB language codes is the natural entry point.
generator = pipeline("translation", model="sjkwon/3999_sft-mdo-diverse-train-nllb-200-600M", src_lang="eng_Latn", tgt_lang="fra_Latn")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForSeq2SeqLMWithValueHead

tokenizer = AutoTokenizer.from_pretrained("sjkwon/3999_sft-mdo-diverse-train-nllb-200-600M")
# NLLB-200 is an encoder-decoder checkpoint, so the seq2seq value-head class applies.
model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained("sjkwon/3999_sft-mdo-diverse-train-nllb-200-600M")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Keltezaa/furry-enhancer | Keltezaa | 2024-10-25T10:34:26Z | 38 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"animals",
"art",
"anthro",
"furry",
"photorealism",
"creature",
"animal",
"anthromorphic",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:34:19Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=Image&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- animals
- art
- anthro
- furry
- photorealism
- creature
- animal
- anthromorphic
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Anthro
widget:
- text: ' '
output:
url: >-
33728188.jpeg
- text: ' '
output:
url: >-
33727667.jpeg
- text: ' '
output:
url: >-
33727758.jpeg
- text: ' '
output:
url: >-
33728721.jpeg
- text: ' '
output:
url: >-
33727759.jpeg
- text: ' '
output:
url: >-
33728059.jpeg
- text: ' '
output:
url: >-
33728347.jpeg
- text: ' '
output:
url: >-
33728222.jpeg
- text: ' '
output:
url: >-
33728343.jpeg
- text: ' '
output:
url: >-
33728344.jpeg
- text: ' '
output:
url: >-
33728345.jpeg
- text: ' '
output:
url: >-
33728350.jpeg
- text: ' '
output:
url: >-
33728346.jpeg
- text: ' '
output:
url: >-
33728349.jpeg
- text: ' '
output:
url: >-
33728354.jpeg
- text: ' '
output:
url: >-
33728351.jpeg
- text: ' '
output:
url: >-
33728352.jpeg
- text: ' '
output:
url: >-
33728357.jpeg
- text: ' '
output:
url: >-
33728353.jpeg
- text: ' '
output:
url: >-
33728569.jpeg
---
# Furry Enhancer
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
This LoRA is made to enhance the ability/quality of creating anthro, furry, and creature images with SDXL, but you can also use it to enhance animals, fur clothing, etc.

It is trained and tested on my models, but it also works on other types of models, including some Pony models. I highly recommend using it with the following models: [Midgard Pony](https://civitai.com/models/470287/midgard-pony-thl-sdxl), [Ratatoskr](https://civitai.com/models/192854/ratatoskr-animal-creature-and-furry), [Bifröst Project](https://tensor.art/models/690973686016215499), [FenrisXL](https://civitai.com/models/122793/fenrisxl), [Yggdrasil](https://tensor.art/models/695459826601380688), [Midgard](https://civitai.com/models/463847/midgard-thl-hybrid).

The LoRA can be used for SFW and NSFW.

Changelog:

**V1** - the first version, still a WIP. I recommend a strength of 0.7-0.9. To get realistic output, use the trigger word "anthro"; for a more digital art style, use "furry".

**V2.84** - added around 200+ more training images (around 800 total) and did a complete retrain:

- more detailed
- more specimens
- better NSFW part (man) (WIP)
- "emotions" (WIP)

I recommend using the model **Ratatoskr** or **Bifröst Project**. Please be so kind and keep the showcase clean of heavy NSFW stuff, and check out my other work too. Thanks to [S1LV3RC01N](https://civitai.com/user/S1LV3RC01N) for the help; please check out her models too.

**V5.2** - this version was completely retrained from the ground up, over weeks, on the base of my Bifröst Project model:

- better look
- more supported models
- more specimens
- better at NSFW
- can turn basically any model into a furry model at a higher strength (1-1.3)

I will provide the complete word list in the future.

**V6.1** - the model was retrained again with over 342K steps and a training time of 300+ hours:

- more versatile
- better details
- better at NSFW
- compatible with different types of models (SDXL 1.0, Lightning, Hyper, Turbo, Pony)

The tag list is inside the "training" folder; I also included some example pictures with the Comfy workflow embedded.

**V1.0 Flux (Photoreal)** - the first Flux version of the enhancer, trained on parts of the SDXL version of the enhancer, with tagging optimized for Flux. It is able to bring furries into nearly any Flux model. Note: it is "only" trained on 960 pictures so far. There will be two versions, Photoreal and Digital Art; the Digital Art version is still in training and will follow later. It can do SFW and NSFW. Have fun with this version of the Furry Enhancer. I can recommend my [Workflow](https://civitai.com/models/684964/workflow-for-ratatoskr-flux-include-custom-clipl) for it. Also please check out my Flux model [Ratatoskr Flux](https://civitai.com/models/681795?modelVersionId=763114).

This disclaimer is not here without reason: please keep in mind that this is like a virtual showroom, and posting extreme content here is like leaving a mess on my desk in real life.

Made by [Freek22](https://civitai.com/user/freek22/models)
## Trigger words
You should use `Anthro`, `Furry`, `Photo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/furry-enhancer/tree/main) them in the Files & versions tab.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/furry-enhancer', weight_name='Furry_Enhancer PhotoV3.420.safetensors')
image = pipeline('Anthro, Furry, Photo').images[0]
```
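The description above recommends a LoRA strength of 0.7-0.9; one hedged way to apply that with diffusers is to fuse the LoRA with a scale (0.8 is just a value inside the recommended range):

```py
# Fuse the LoRA into the base weights at the recommended strength.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Anthro, Furry, Photo').images[0]
```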
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/jessica-simpson-flux-model | Keltezaa | 2024-10-25T10:33:52Z | 85 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"woman",
"celebrity",
"jessica simpson",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:15:38Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- woman
- celebrity
- jessica simpson
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jsimp
widget:
- text: ' '
output:
url: >-
32836407.jpeg
- text: ' '
output:
url: >-
32730380.jpeg
- text: ' '
output:
url: >-
32883280.jpeg
- text: ' '
output:
url: >-
36016253.jpeg
- text: ' '
output:
url: >-
36017318.jpeg
---
# Jessica Simpson Flux Model
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
Jessica Ann Simpson, born July 10, 1980, in Abilene, TX, is an American recording artist, actress, and fashion designer. Simpson's career began in the 1990s when she independently recorded and distributed a gospel album titled "Jessica." She later signed a recording contract with Columbia Records, which released her debut studio album, "Sweet Kisses," in 1999. It sold two million copies, supported by the singles "I Wanna Love You Forever," "I Think I'm in Love with You," and "Where You Are." In 2001, Simpson released her second studio album, "Irresistible." While its lead single - the title track - was a hit, the album was poorly received and sold less than half of its predecessor. During this time, Simpson complained that her label instructed her to lose weight and focus more on dance/choreography, to compete with the likes of Britney Spears and Janet Jackson.

Simpson married Nick Lachey, of the boy band 98 Degrees, in 2002, and their relationship was chronicled on the MTV reality series "Newlyweds: Nick & Jessica." The show was a sensation, turning Simpson into a household name, with many people gravitating toward her humorous "dumb blonde" persona. Simpson's third studio album, "In This Skin," was released in tandem with the show and eventually sold over seven million copies worldwide, becoming Simpson's best-selling album. "Newlyweds" ended in 2005 after four seasons, and Simpson and Lachey announced that they were divorcing later that year.

Simpson made her big-screen debut in the film "The Dukes of Hazzard" (2005), portraying Daisy Duke. Despite mixed reviews, it was a box office success. Simpson's other screen credits include the films "Employee of the Month" (2006), "Blonde Ambition" (2007), and "Private Valentine: Blonde & Dangerous" (2008), and the television series "The Twilight Zone" (2002) and "That '70s Show." A sitcom pilot based around Simpson was shot in 2004 but failed to garner interest from any networks.

In 2005, Simpson launched her own fashion line, the Jessica Simpson Collection. Initially focused on shoes only, it later expanded into clothing, accessories, and fragrances, and has generated over $1 billion in revenue. Throughout the 2000s, Simpson also continued with her music career, releasing the pop album "A Public Affair" (2006); "Do You Know" (2008), a brief foray into country music; and two holiday records, "ReJoyce: The Christmas Album" (2004) and "Happy Christmas" (2010). To date, she has sold more than 20 million albums worldwide. In 2011, Simpson hosted the VH1 docuseries "The Price of Beauty," which explored different cultural perceptions of beauty.

After a decade spent building her brand and raising her family, Simpson returned to the spotlight in 2020 with the release of her memoir, the aptly titled "Open Book." In the New York Times bestseller, she candidly discussed her struggles with body image, self-love, and substance use. An EP that accompanied the audiobook was Simpson's first music release in a decade. Standalone single "Particles," a cover of the Nothing but Thieves track, followed in 2021.
## Trigger words
You should use `jsimp` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/jessica-simpson-flux-model/tree/main) them in the Files & versions tab.
## Use it with the [𧨠diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/jessica-simpson-flux-model', weight_name='4fa088cbab0d4e4198fe0ffd8fb32f2f_pytorch_lora_weights.safetensors')
image = pipeline('jsimp').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/alexandra-daddario-flux | Keltezaa | 2024-10-25T10:33:48Z | 54 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"woman",
"actress",
"celebrity",
"realistic",
"alexandra daddario",
"flux1.d",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:15:33Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- woman
- actress
- celebrity
- realistic
- alexandra daddario
- flux1.d
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: alexandra_daddario
widget:
- text: 'A close-up portrait of alexandra_daddario with wet, dark hair cascading down her shoulders, creating a dramatic and ethereal effect. Her gaze is intense and focused, and her makeup is dramatic, emphasizing her eyes and lips. The background is a deep, muted brown, contrasting with the woman''s dark hair and skin tones. The image is a monochromatic portrait, emphasizing the subject''s features and the wetness of her hair. The style is dramatic and evocative, with a focus on capturing the essence of the moment rather than intricate details.'
output:
url: >-
34341584.jpeg
- text: 'A close-up portrait of alexandra_daddario, captured in a side profile. She is surrounded by a flurry of shimmering, iridescent crystals that catch the light, creating a mesmerizing effect. The woman''s gaze is directed to the side, her gaze intense and captivating. Her skin is a warm, golden-brown, and her lips are a soft pink. She wears a simple, light-colored top that contrasts with the sparkling crystals. The blurred background emphasizes the woman and the crystals, highlighting their intricate details. The overall mood of the image is serene and ethereal, evoking feelings of wonder and enchantment.'
output:
url: >-
34341968.jpeg
- text: 'A close-up portrait of alexandra_daddario with a striking, artistic makeup style. The subject has a pale complexion, dark eyeshadow, and a black nose piercing. The makeup features intricate black and white designs resembling a skull or skeleton. The person has wet, wavy black hair that falls to their shoulders. A dark background highlights the subject''s face and neck, which features a detailed, skeletal design. The image conveys a somber and intense mood, evoking a sense of decay.'
output:
url: >-
34341656.jpeg
- text: 'A portrait of alexandra_daddario lying on a wooden bench in a forest. She has long, flowing, reddish-brown hair and striking blue eyes. She wears a blue floral dress with pink and white flowers and a dark, textured sweater. Her pose is relaxed; one arm rests on the bench, the other rests on her chest. The background is blurred, indicating a natural setting with greenery. The image style is candid and natural, capturing a serene moment.'
output:
url: >-
34341653.jpeg
- text: 'A serene portrait of alexandra_daddario lying in a body of water, her head resting on a bed of green grass. The sun casts a warm, golden light over the scene, creating a dreamy atmosphere. The woman has long, wavy, reddish-black hair and wears a light-colored blouse. The water reflects the surrounding greenery, and the background is blurred, emphasizing the subject. The image conveys a sense of tranquility and contemplation.'
output:
url: >-
34341659.jpeg
- text: 'A portrait of a young woman with long, wavy blonde hair, captured in a side profile. She wears a dark coat, and her expression is contemplative. The setting is an urban environment, with a building visible in the background. Soft lighting casts a warm glow on her face and hair, highlighting her features and the texture of her clothing. The image conveys a serene and introspective mood.'
output:
url: >-
34341767.jpeg
- text: 'A portrait of alexandra_daddario seated on a dark couch. She wears a long-sleeved, purple dress with intricate lace detailing. Her hair is pulled back into a bun, and she gazes directly at the camera with a contemplative expression. Soft lighting casts a warm glow on her face, highlighting her features and clothing. The background is blurred, drawing focus to the subject. The image style is contemporary, emphasizing the subject''s natural beauty and the interplay of light and shadow.'
output:
url: >-
34341768.jpeg
- text: 'A portrait of alexandra_daddario with shoulder-length, wavy black hair. She has striking green eyes and a neutral expression. Her lips are painted a soft pink, and she holds her hair with both hands. She wears a black top. The background is a textured turquoise wall with visible rust. The image style is contemporary, emphasizing the subject''s natural beauty and the muted color palette.'
output:
url: >-
34341892.jpeg
- text: 'A portrait of alexandra_daddario with long, vibrant black hair. She wears a sleeveless dress with a tropical print of green, red, and yellow leaves and flowers. The dress has a high neckline and a cinched waist. Her pose is confident: one hand rests on her head, fingers lightly touching her hair. The background is a plain, light-colored wall. The image style is contemporary, emphasizing the subject''s natural beauty and elegance.'
output:
url: >-
34341920.jpeg
- text: 'A dreamy, ethereal portrait of alexandra_daddario in a field of vibrant red flowers. The woman, with long, wavy black hair, is the central subject, resting her head on her hand, appearing serene and contemplative. Her eyes are closed, and her lips are painted a deep red. The flowers surrounding her are a mix of red and blue, with some white blossoms and green leaves. The background is blurred, emphasizing the woman and the flowers, creating a dreamy atmosphere. The color palette is dominated by deep reds, purples, and blues, with the red flowers providing a striking contrast. The image has a romantic and ethereal style, blending realism and fantasy.'
output:
url: >-
34341957.jpeg
- text: 'A close-up portrait of alexandra_daddario with long, straight black hair. She wears a red top, and her gaze is directed slightly to the side. The background is blurred, highlighting the subject. Soft lighting illuminates her face and hair, creating a dramatic effect. The color palette consists primarily of warm tones, with the black hair contrasting against the deep red of her top.'
output:
url: >-
34341977.jpeg
- text: 'A portrait of alexandra_daddario with long, reddish-brown hair, set against a blurred green background. She wears a mustard-yellow, textured sweater that reaches her shoulders. Her fair skin has freckles, and her striking blue eyes are accentuated by her makeup. The woman''s pose is relaxed, with her hands gently touching her neck. The image conveys a serene and contemplative mood.'
output:
url: >-
34341992.jpeg
- text: 'alexandra_daddario, A portrait of a woman, captured in a moody, atmospheric style. She is positioned against a dark, textured background, emphasizing her face and upper body. Her gaze is direct and contemplative, her hand gently touching her cheek. She wears a black, long-sleeved top that contrasts with the muted tones of the background. Soft lighting casts a gentle glow on her face, highlighting her features and the texture of her clothing. The image conveys a sense of introspection.'
output:
url: >-
34342019.jpeg
- text: 'A portrait of alexandra_daddario with long, silver hair, standing in a dimly lit forest. She wears a black dress with a delicate lace collar. Her pose is contemplative, with her hands resting on her hips. The background is a blend of green foliage and a wooden structure, creating a rustic ambiance. The color palette consists primarily of dark greens, browns, and blacks, with the woman''s silver hair contrasting the lighter background.'
output:
url: >-
34342117.jpeg
- text: 'A portrait of alexandra_daddario with striking pink and purple hair, captured in a candid moment. She holds a Pentax camera with a large lens, indicating she is a professional photographer. She wears a light gray off-shoulder sweater. The background is a textured white brick wall, contrasting with the warm tones of her hair and the camera. The image conveys a serene and contemplative mood, emphasizing the subject''s natural beauty and the intricate details of her photography.'
output:
url: >-
34342054.jpeg
- text: 'A vivid portrayal of alexandra_daddario soldier in a post-apocalyptic setting. She is in the foreground, aiming a rifle with a scope, her focused expression conveying determination. The soldier wears a black t-shirt, gloves, and protective gear. The background shows a chaotic scene of fire and debris, with a large barrel and a helmet partially visible. The color palette is dominated by fiery oranges, reds, and browns, creating a sense of urgency and danger.'
output:
url: >-
34342108.jpeg
- text: 'A portrait of alexandra_daddario dressed in a gothic-inspired costume, standing outdoors against a blurred background of greenery. The subject wears a black corset with silver spikes and a ruffled skirt, a belt with a skull pendant, and a large, feathered headpiece. Their face is painted with striking red eyeshadow and black makeup, creating a grim reaper-like appearance. The person''s pose is confident and poised, with one hand raised and the other resting on their hip. The overall color palette is dark, dominated by black, silver, and red.'
output:
url: >-
34342101.jpeg
- text: 'A portrait of alexandra_daddario lying on a bed. She wears a gray tank top and a yellow towel wrapped around her head. The bedspread features black and white circular patterns. The woman''s relaxed pose includes one arm resting on her hip and the other on the pillow. The background is blurred, emphasizing the subject. The image is candid and natural, capturing a moment of tranquility.'
output:
url: >-
34342067.jpeg
- text: 'A portrait of alexandra_daddario in a vintage-style bar. She wears a red dress with a black collar and holds a glass of orange juice. Her black hair flows down her back, and she smiles at the camera. The bar is rustic with wooden beams, hanging baskets, and a sign reading ''CUMMEAD''. The background shows shelves stocked with bottles and a counter with alexandra_daddario seated at the bar. The color palette is warm, dominated by browns, reds, and yellows, with the orange juice providing a vibrant contrast.'
output:
url: >-
34342295.jpeg
- text: 'A portrait of alexandra_daddario with long, wavy, reddish-black hair, set against a backdrop of autumn leaves in shades of red, yellow, and orange. She wears a sleeveless, off-white dress with a gold corset-like design. Her makeup is dramatic, featuring bold red lipstick and smoky eyeshadow. The woman''s pose is contemplative, with one hand resting gently on her shoulder. The image has a dreamy, ethereal quality, enhanced by the warm colors of the leaves.'
output:
url: >-
34342037.jpeg
---
# Alexandra Daddario FLUX
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Alexandra Anna Daddario is an American actress. Her breakthrough was portraying Annabeth Chase in the Percy Jackson film series.</p>
## Trigger words
You should use `alexandra_daddario` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/alexandra-daddario-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/alexandra-daddario-flux', weight_name='AlexandraDaddarioF1D.safetensors')
image = pipeline("A portrait of alexandra_daddario with long, wavy, reddish-black hair, set against a backdrop of autumn leaves in shades of red, yellow, and orange. She wears a sleeveless, off-white dress with a gold corset-like design. Her makeup is dramatic, featuring bold red lipstick and smoky eyeshadow. The woman's pose is contemplative, with one hand resting gently on her shoulder. The image has a dreamy, ethereal quality, enhanced by the warm colors of the leaves.").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/brooke-shields | Keltezaa | 2024-10-25T10:33:38Z | 26 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"celebrity",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:15:25Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- celebrity
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: ' , female , posing for photographer, thin, tall,, smiling, happy,'
output:
url: >-
35262296.jpeg
- text: ' , female , posing for photographer, thin, tall,, smiling, happy,'
output:
url: >-
35262293.jpeg
---
# Brooke Shields
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>One more girl I used to crush on, when I was younger...</p><p>Follow me for more!!</p><p>Niko3DX</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/brooke-shields/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/brooke-shields', weight_name='brooke-shields.safetensors')
image = pipeline(' , female , posing for photographer, thin, tall,, smiling, happy,').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/britney-spears-flux-model | Keltezaa | 2024-10-25T10:33:23Z | 67 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"celebrity",
"britney spears",
"flux1.d",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:15:03Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- celebrity
- britney spears
- flux1.d
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: britney
widget:
- text: ' '
output:
url: >-
32347562.jpeg
- text: ' '
output:
url: >-
32948418.jpeg
- text: ' '
output:
url: >-
35870187.jpeg
- text: ' '
output:
url: >-
32349015.jpeg
- text: ' '
output:
url: >-
32347485.jpeg
---
# Britney Spears Flux Model
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Britney Jean Spears, born December 2, 1981, in McComb, MS, is an American recording artist and entertainer. Oft referred to as the "Princess of Pop," she is credited with the revival of pop music during the late 1990s and early 2000s, and is recognized as an icon. Spears has sold an estimated 150 million records worldwide, making her one of the world's best-selling music artists. She ranks as the best-selling female albums artist of the 2000s, the eighth-biggest artist overall of the 2000s, and the fourth best-selling female albums artist and tenth best-selling digital artist in history. Spears has earned countless awards and accolades, including a Grammy Award, 15 Guinness World Records, Billboard's Millennium Award, GLAAD's Vanguard Award, the inaugural Radio Disney Icon Award, MTV's Michael Jackson Video Vanguard Award, and a star on the Hollywood Walk of Fame. In 2020, Rolling Stone named her song "…Baby One More Time" the best debut single of all time. Time selected Spears as one of its 100 Most Influential People in 2021.</p><p></p><p>Spears made her local stage debut at age 5, singing "What Child Is This?" at her kindergarten graduation. Throughout her childhood, Spears took voice, dance, and gymnastic lessons, while competing in pageants and talent shows. For a short time, before turning her focus to music and dance, she trained with famed Olympics gymnastics coach Bela Karolyi. In 1993, Spears was cast on Disney's "The New Mickey Mouse Club," alongside other future stars like Christina Aguilera, Justin Timberlake, Ryan Gosling, and Keri Russell. She remained on the series until its cancellation two years later. Spears signed a record deal with Jive Records in 1997, when she was 15. Her first single, "…Baby One More Time," was released in October 1998. Buoyed by its controversial music video, the song reached No. 1 in 23 countries, propelling Spears to international superstardom and ushering in a new era of pop music. Spears' debut studio album, also titled "…Baby One More Time," arrived in January 1999. It debuted at No. 1 in the US, making Spears the first artist in history to have both the No. 1 song and album in the same week. In total, "...Baby One More Time" sold over 25 million copies worldwide.</p><p></p><p>Spears' sophomore album, "Oops!... I Did It Again" (2000), sold 1.3 million copies in its first week alone and held the record for the fastest-selling album by a female artist in the US for 15 years. Spears adopted a more mature sound and style for her third and fourth albums, 2001's "Britney" and 2003's "In the Zone." Despite backlash over Spears' increasingly provocative image, both albums sold over 10 million copies worldwide.</p><p></p><p>Spears made her big-screen debut in the motion picture "Crossroads" (2002), written by Shonda Rhimes and co-starring Dan Aykroyd, Kim Cattrall, Zoe Saldana, and Taryn Manning. She has also guest-starred on "Glee," "How I Met Your Mother," "Will & Grace," "Sabrina, the Teenage Witch," and "Jane the Virgin," and has twice hosted "Saturday Night Live" and appeared as musical guest three times.</p><p></p><p>In 2004, Spears partnered with Elizabeth Arden to launch her first perfume, Curious. Spears currently has over 30 fragrances to her name, available in 85 countries, with sales exceeding $1.5 billion.</p><p></p><p>Spears served as executive producer of her fifth studio album, "Blackout" (2007). Though it initially received lukewarm reviews, "Blackout" has since been recognized as one of the most influential albums of its time. In 2008, after a bout of personal struggles that were breathlessly documented by the media, Spears was placed in a conservatorship that stripped her of all personal autonomy and put her estranged father in control of her person and estate. (The conservatorship remained in place until November 2021. Spears has described the abuse, isolation, and forced labor that she endured while under her father's control.) Soon after the conservatorship was implemented, Spears returned to work, releasing the chart-topping albums "Circus" (2008) and "Femme Fatale" (2011), both of which were supported by extensive worldwide concert tours.</p><p></p><p>In 2012, Spears appeared as a judge on "X-Factor USA," becoming, at the time, the highest-paid reality TV judge in history. That same year, Spears was featured on <a target="_blank" rel="ugc" href="http://will.i.am">will.i.am</a>'s single "Scream & Shout." Reception to the song was mixed, but it peaked at No. 3 on the Hot 100 and became the very first No. 1 on Billboard's new Dance/Electronic Songs chart. <a target="_blank" rel="ugc" href="http://will.i.am">will.i.am</a> later executive-produced Spears' eighth studio album, "Britney Jean" (2013). In December 2013, Spears began a Las Vegas concert residency, "Britney: Piece of Me," at Planet Hollywood Resort & Casino. The show was initially scheduled to run for two years, but was extended several times due to its enduring popularity. It ultimately concluded in December 2017. Spears and her residency revitalized the Vegas strip, and the show won numerous awards during its run, including Best Show in Vegas and Best Bachelorette Show in Vegas. In 2015, Spears released the single "Pretty Girls," featuring Iggy Azalea, and contributed vocals to Giorgio Moroder's "Tom's Diner." Reportedly, Spears approached Moroder to collaborate on the song. Spears' ninth studio album, "Glory," arrived in August 2016, preceded by the Top 20 hit "Make Me..." with G-Eazy. Spears later took her Vegas show on the road throughout 2017 and 2018, with dates in some countries that she had never toured previously. "Glory" was re-released in 2020 with updated cover art and additional songs following a successful fan campaign to push "Mood Ring" - originally a Japan-only bonus track - to No. 1 on iTunes.</p><p></p><p>Spears teamed up with Elton John in 2022 to release the single "Hold Me Closer," which debuted at No. 6 on the Hot 100 and became Spears' highest-charting single in a decade. Also in 2022, publishing house Simon & Schuster signed Spears to a book deal worth a staggering $15 million. Spears' hotly-anticipated memoir, "The Woman in Me," hit shelves in October 2023. In its first week, it sold 1.1 million copies in the US and 2.4 million copies worldwide, immediately becoming a New York Times #1 bestseller, as well as the fastest-selling title in Simon & Schuster's history. A film adaptation of Spears' memoir, to be directed by Jon Chu, was announced in 2024.</p><p></p><p>After reports surfaced that Spears was working on a new album, she clarified via Instagram that she currently has no plans to return to the music industry.</p>
## Trigger words
You should use `britney` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/britney-spears-flux-model/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/britney-spears-flux-model', weight_name='lora.safetensors')
image = pipeline('britney').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/sarah-michelle-gellar-flux-model | Keltezaa | 2024-10-25T10:33:15Z | 77 | 4 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"sarah michelle gellar",
"woman",
"celebrity",
"buffy the vampire slayer",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:14:49Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- sarah michelle gellar
- woman
- celebrity
- buffy the vampire slayer
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: smg
widget:
- text: ' '
output:
url: >-
35741904.jpeg
- text: ' '
output:
url: >-
35740912.jpeg
- text: ' '
output:
url: >-
35532092.jpeg
- text: ' '
output:
url: >-
35532429.jpeg
- text: ' '
output:
url: >-
35532091.jpeg
---
# Sarah Michelle Gellar Flux Model
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Sarah Michelle Prinze (née Gellar), born April 14, 1977, in New York City, NY, is an American actress, producer, and entrepreneur. As a child, Gellar modeled for Wilhelmina and appeared in commercials for Burger King, Avon, and Shake 'n Bake. She featured in the fast food industry's very first "attack ad," in which she negatively compared McDonald's food to that of Burger King. The former was outraged by the commercial and sued Burger King, naming the 5-year-old Gellar as a defendant. She reportedly received a lifetime ban from McDonald's that forbade her from eating at any of their restaurants. In 1983, Gellar made her screen acting debut in the television film "An Invasion of Privacy," after impressing the casting director by reading both her own and co-star Valerie Bertolini's lines in her audition. After a stint on the short-lived teen drama series "Swans Crossing" playing mean girl Sydney Rutledge, she was cast as a similar character, Kendall Hart, on the soap "All My Children," for which she received a Daytime Emmy Award for Outstanding Younger Actress. From 1997-2003, Gellar headlined the supernatural drama "Buffy the Vampire Slayer," a hugely popular and influential series that launched her to stardom and brought her widespread acclaim and recognition, including a Golden Globe nomination for Best Performance by an Actress in a Television Series - Drama, a Saturn Award for Best Genre TV Actress, and an SFX Award for Best TV Actress, among many others. Gellar has also achieved significant success in film, having starred in box office hits like "I Know What You Did Last Summer" (1997), "Scream 2" (1997), "Cruel Intentions" (1999), "Scooby-Doo" (2002) and "Scooby-Doo 2: Monsters Unleashed" (2004), "The Grudge" (2004) and "The Grudge 2" (2006), and "TMNT" (2007), as well as independent films such as "Southland Tales" (2006), "The Air I Breathe" (2007), and "Veronika Decides to Die" (2009). Her small role in the 2022 film "Do Revenge" was written specifically for her, envisioned as an adult version of her character in "Cruel Intentions." Gellar's other notable credits include the television series "Ringer," "The Crazy Ones," and "Wolf Pack," all of which were canceled after just one season, and the upcoming "Dexter: Original Sin." Because of her involvement in so many horror-adjacent projects, Gellar has been dubbed the scream queen of her generation. In 2015, Gellar co-founded Foodstirs, an e-commerce startup selling organic baking kits, and later released her own cookbook, "Stirring Up Fun with Food." Gellar has been married to her "I Know What You Did Last Summer" and "Scooby-Doo" co-star Freddie Prinze, Jr. since 2002.</p>
## Trigger words
You should use `smg` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/sarah-michelle-gellar-flux-model/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/sarah-michelle-gellar-flux-model', weight_name='SMGFluxModel.safetensors')
image = pipeline('smg').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/jenna-ortega-flux | Keltezaa | 2024-10-25T10:33:04Z | 136 | 2 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"jenna ortega",
"woman",
"actress",
"celebrity",
"realistic",
"flux1.d",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:14:05Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=False&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- jenna ortega
- woman
- actress
- celebrity
- realistic
- flux1.d
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jenna_ortega
widget:
- text: 'A portrait of jenna_ortega in a dark, vintage setting. She holds a book titled ''Love is a LIE'' with a black and white image of jenna_ortega''s face on the cover.The woman wears a white collar and a black jacket. Her hair is styled in two braids. She looks directly at the camera with a serious expression. Behind her is a wooden wall with intricate carvings and a framed portrait depicting the frankenstein monster. The color palette consists of dark browns, blacks, and whites, creating a somber and reflective atmosphere.'
output:
url: >-
32806179.jpeg
- text: 'A portrait of jenna_ortega in a serene outdoor setting. She wears a floral crown of white and red flowers and holds a bouquet of yellow and orange flowers. Her long black hair
flows down her back, and she wears a single red rose. The background is softly blurred, emphasizing the woman and the flowers. Sunlight filters through the leaves, casting a warm glow on the scene. The image has a dreamy, ethereal quality, with a color palette dominated by earthy tones.'
output:
url: >-
32806239.jpeg
- text: 'A portrait of jenna_ortega standing in a blossoming garden. She wears a wide-brimmed straw hat and a floral-patterned blouse. The blouse is white with pink and purple floral designs. The woman holds the hat with one hand and adjusts it with the other. The background is filled with delicate white blossoms, creating a serene and dreamy atmosphere. The image''s color palette is soft and muted, with the white blouse contrasting against the pink blossoms.'
output:
url: >-
32806240.jpeg
- text: 'A black and white portrait of jenna_ortega at a festival. She wears a bohemian-style outfit with intricate patterns and layered jewelry, including multiple necklaces, earrings, and bracelets. Her hair is styled in loose waves, and she wears a headband. The woman stands confidently with her hands in her pockets, displaying a belt with a circular buckle. The background shows a crowd of people, tents, and a clear sky. The image is candid, capturing a moment of casual enjoyment at the festival.'
output:
url: >-
32806242.jpeg
- text: 'A portrait of jenna_ortega in a dark, vintage setting. She holds a book titled ''I hate everything'' with a black and white image of jenna_ortega''s face on the cover. There is text on the bottom of the book cover that reads ''Jenna Ortega''.The woman wears a white collar and a black jacket. Her hair is styled in two braids. She looks directly at the camera with a serious expression. Behind her is a wooden wall with intricate carvings and a framed portrait depicting the frankenstein monster. The color palette consists of dark browns, blacks, and whites, creating a somber and reflective atmosphere.'
output:
url: >-
32806243.jpeg
- text: 'A close-up portrait of jenna_ortega with dark hair styled in two braids. She wears a dark suit with a white shirt and a black tie. The woman points directly at the camera with a slight smile. The background is dimly lit with a warm, amber glow, creating a moody atmosphere. The image style is dramatic and evocative, emphasizing the subject''s expression and the interplay of light and shadow.'
output:
url: >-
32806245.jpeg
- text: 'A portrait of jenna_ortega with long, wavy brown hair. She wears a white, off-the-shoulder sweater. The woman''s gaze is direct and intense, and her lips are painted a soft pink. The background is a warm, fiery orange and yellow, creating a bokeh effect. The image style is candid and natural, emphasizing the subject''s features and expressions.'
output:
url: >-
32806264.jpeg
- text: 'A portrait of jenna_ortega in a dark, vintage setting. She holds a book titled ''THE CRUEL DEATH OF A GUY WHO KEEPS BEGGING FOR BUZZ!'' with a black and white image of jenna_ortega''s face on the cover.The woman wears a white collar and a black jacket. Her hair is styled in two braids. She looks directly at the camera with a serious expression. Behind her is a wooden wall with intricate carvings and a framed portrait depicting the frankenstein monster. The color palette consists of dark browns, blacks, and whites, creating a somber and reflective atmosphere.'
output:
url: >-
32806244.jpeg
- text: 'A close-up portrait of jenna_ortega lying on a bed. She has long, dark hair and wears a white lace-up top. Her makeup is subtle, emphasizing her eyes and lips. The background is blurred, focusing attention on the subject. The color palette is soft and muted, with the woman''s skin and hair contrasting against the neutral background.'
output:
url: >-
32806266.jpeg
- text: 'A portrait of jenna_ortega with striking red hair, captured in a side profile. She wears a black, dotted blouse with a high neckline and long sleeves. Her pose is relaxed; one hand rests on her neck, and the other touches her face. The background is a chain-link fence with a geometric pattern, illuminated by warm, ambient light. The color palette consists primarily of dark tones, with the red hair contrasting against the lighter background. The image conveys a mood of contemplation and introspection.'
output:
url: >-
32806269.jpeg
- text: 'A portrait of jenna_ortega seated on a burgundy leather couch in an indoor setting. She wears a ribbed turtleneck sweater and blue jeans. Her black hair
is styled in loose waves, and she has a neutral expression. The background is blurred, but it appears to be a cafe or restaurant with warm, ambient lighting. The image style is candid and natural, capturing a serene moment.'
output:
url: >-
32806276.jpeg
- text: 'A portrait of jenna_ortega seated on a pilates reformer in a gym. She wears a black tank top and black leggings, with a black wristband on her left wrist. Her right hand rests on her head, and her gaze is directed to the side. The background shows gym equipment and a window with vertical blinds. Soft lighting illuminates the scene, creating a calm and focused atmosphere.'
output:
url: >-
32806281.jpeg
- text: 'A portrait of jenna_ortega in a traditional Japanese kimono, standing in a field of tall grasses. She holds a samurai sword in her right hand and her left hand rests on her hip. Her hair is blowing in the wind, and her gaze is directed away from the camera. The background shows a mountain range under a partly cloudy sky, with orange and yellow leaves floating in the air. The color palette is dominated by earthy tones, with the woman''s dark kimono contrasting against the golden-brown grasses and the muted blue sky.'
output:
url: >-
32806282.jpeg
- text: 'A portrait of jenna_ortega in a dark, Victorian-era setting. She wears a black dress with a white collar and holds a black balloon with white text reading "Every Day is Wednesday. " The woman has braided hair and stares directly at the camera with a serious expression. The background features a wooden paneled wall with a framed photograph of a group of people. The color palette consists primarily of dark browns, blacks, and whites, creating a somber and reflective atmosphere.'
output:
url: >-
32806249.jpeg
- text: 'A portrait of jenna_ortega with long, wavy black hair
, captured in a candid moment. She wears a white, long-sleeved blouse and smiles warmly at the camera. The background is softly blurred, highlighting the woman and the blossoms. The blossoms are in full bloom, with delicate pink and white petals contrasting against the green foliage. The image conveys a serene and natural mood, with the woman appearing relaxed and content.'
output:
url: >-
32806283.jpeg
- text: 'A portrait of jenna_ortega in a forest. She is captured in a side profile, her face partially hidden by her long, flowing hair. Her eyes are a striking blue, and she wears a red, ribbed sweater. The woman''s pose is relaxed, with one hand gently touching her hair. The background is a dense forest with tall, slender trees, and the ground is covered in autumnal red leaves. The image has a warm, earthy color palette, with the red of the woman''s sweater contrasting against the green of the trees and foliage.'
output:
url: >-
32806285.jpeg
- text: 'A vivid portrayal of jenna_ortega playing a double bass. She has dark hair styled in two braids and wears a dark, textured jacket. The woman''s focused expression and intense gaze indicate concentration on her performance. The double bass is the main subject, with its sleek black body contrasting sharply with the vibrant colors of the stained glass behind her. The stained glass window behind her displays a rainbow of colors, including red, orange, yellow, green, blue, and purple. The background is a blend of architectural elements, including a large, ornate window with intricate designs. The image style is dramatic and evocative, emphasizing the interplay of light and shadow.'
output:
url: >-
32806295.jpeg
- text: 'A portrait of jenna_ortega with long, wavy black hair
, captured in a side profile. She leans against a palm tree, her hand gently touching the leaves. The woman wears a black top and her nails are painted a vibrant red. The background is a serene beach scene with palm trees and a clear blue sky. The image has a soft, dreamy color palette of earthy tones, with the woman''s black hair
contrasting against the lighter background.'
output:
url: >-
32806292.jpeg
- text: 'A portrait of jenna_ortega leaning against a brick wall. She wears a white tank top and blue jeans. Her curly black hair
is styled in loose waves, and she has a relaxed pose with one hand resting on her chin. The brick wall is weathered with red and brown bricks. The image is candid and natural, capturing a moment of stillness.'
output:
url: >-
32806286.jpeg
- text: 'A detailed portrayal of jenna_ortega warrior in ornate silver and gold armor, standing in a grand, gothic-style cathedral. She holds a long, ornate sword with a golden hilt and a red gem at the top. Her attire is intricately designed with gold and silver patterns and embellishments, including a prominent chest plate featuring a blue gem. Her hair is styled in loose waves, and she has a determined expression. The background is blurred, highlighting the warrior as the main subject, and the cathedral''s architecture is visible behind her. The color palette is dominated by gold, silver, and white, creating a regal and majestic atmosphere.'
output:
url: >-
32806519.jpeg
---
# Jenna Ortega FLUX
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Jenna Marie Ortega is an American actress who rose to prominence for her portrayal of Wednesday Addams in the Netflix horror comedy series 'Wednesday'. She has also starred in the slasher films 'Scream', 'X', and 'Scream VI', as well as in the fantasy film 'Beetlejuice Beetlejuice'.</p>
## Trigger words
You should use `jenna_ortega` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/jenna-ortega-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/jenna-ortega-flux', weight_name='JennaOrtega_F1D.safetensors')
image = pipeline("A detailed portrayal of jenna_ortega warrior in ornate silver and gold armor, standing in a grand, gothic-style cathedral. She holds a long, ornate sword with a golden hilt and a red gem at the top. Her attire is intricately designed with gold and silver patterns and embellishments, including a prominent chest plate featuring a blue gem. Her hair is styled in loose waves, and she has a determined expression. The background is blurred, highlighting the warrior as the main subject, and the cathedral's architecture is visible behind her. The color palette is dominated by gold, silver, and white, creating a regal and majestic atmosphere.").images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/katy-perry-flux | Keltezaa | 2024-10-25T10:32:52Z | 123 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"singer",
"photorealistic",
"sexy",
"woman",
"celebrity",
"girls",
"realistic",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:13:46Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Sell&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- singer
- photorealistic
- sexy
- woman
- celebrity
- girls
- realistic
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: ' This is a high resolution image of a women her facial features are sharp and defined. She has prominent cheekbones, including dark eyeshadow and a deep red lipstick., smoky eye makeup. bright blue eyes. She has long, dark brown hair cascading over her shoulders and a light olive skin tone. Wearing a long dress, standing in a cafe'
output:
url: >-
33711437.jpeg
- text: ' This is a high resolution image of a women her facial features are sharp and defined. She has prominent cheekbones, including dark eyeshadow and a deep red lipstick., smoky eye makeup. bright blue eyes. She has long, dark brown hair cascading over her shoulders and a light olive skin tone. Wearing a long dress, standing in a cafe, smiling'
output:
url: >-
33711452.jpeg
- text: ' beautiful detailed photograph, wearing a dress, long straight hair, standing in a jungle, wearing an explorer''s outfit'
output:
url: >-
33711512.jpeg
- text: ' beautiful detailed photograph, wearing a dress, long straight hair, standing in a spaceship, wearing a sci-fi space suit'
output:
url: >-
33711603.jpeg
- text: ' beautiful detailed photograph, wearing a dress, long straight hair, standing in a spaceship, wearing a sci-fi space suit'
output:
url: >-
33711852.jpeg
- text: ' beautiful detailed photograph, wearing a dress, short hair curled at the bottom, standing in a joyful forest clearing. dressed as snow white, closeup shot'
output:
url: >-
33711732.jpeg
- text: ' beautiful detailed photograph, wearing a dress, short hair curled at the bottom, standing in a joyful forest clearing. dressed as snow white, closeup shot'
output:
url: >-
33711748.jpeg
- text: ' beautiful detailed photograph, wearing a dress, short hair curled at the bottom, standing in a joyful forest clearing. dressed as snow white, closeup shot'
output:
url: >-
33711768.jpeg
---
# Katy Perry (Flux)
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<p>Katy Perry - Trained for Flux</p>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/katy-perry-flux/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/katy-perry-flux', weight_name='Katy_Perry_Flux.safetensors')
image = pipeline(' beautiful detailed photograph, wearing a dress, short hair curled at the bottom, standing in a joyful forest clearing. dressed as snow white, closeup shot').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Keltezaa/flux-kaley-cuoco | Keltezaa | 2024-10-25T10:32:49Z | 102 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"woman",
"celebrity",
"girls",
"realistic",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T10:13:40Z | ---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- woman
- celebrity
- girls
- realistic
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
widget:
- text: ' '
output:
url: >-
33668647.jpeg
- text: ' '
output:
url: >-
33669114.jpeg
- text: ' '
output:
url: >-
33668936.jpeg
- text: ' '
output:
url: >-
33669368.jpeg
- text: 'realistic portrait of a blonde woman wearing a dragonball cosplay in a cyberpunk city
'
output:
url: >-
33668482.jpeg
- text: 'realistic portrait of a blonde woman holding a light saber near her face'
output:
url: >-
33668480.jpeg
- text: 'realistic portrait of a black hair woman with pale skin in a cyberpunk setting at night, front view'
output:
url: >-
33668478.jpeg
- text: 'cgi render of a black hair woman with pale skin in a cyberpunk setting , front view'
output:
url: >-
33668486.jpeg
---
# FLUX - Kaley Cuoco
<Gallery />
([CivitAI](https://civitai.com/models/))
## Model description
<h3 id="flux-kaley-cuocowant-a-customprivate-lora-get-it-here-:-ko-fi-commissionprompts-in-showcase-imagesenjoy!leave-me-a-review-so-it-can-improve!-w4nnet9lu"><strong><span style="color:#40c057">FLUX - Kaley Cuoco</span></strong><br /><br /><strong><span style="color:rgb(121, 80, 242)">Want a Custom/private LoRA? </span><span style="color:rgb(21, 170, 191)">Get it here : </span></strong><a target="_blank" rel="ugc" href="https://ko-fi.com/c/2042ce3d32"><strong><span style="color:rgb(253, 126, 20)">Ko-Fi Commission</span></strong></a><span style="color:rgb(76, 110, 245)"><br /></span><br /><strong>Prompts in showcase images</strong><br /><br /><strong><span style="color:rgb(64, 192, 87)">Enjoy!</span></strong><br /><br /><strong>Leave me a review so it can improve!</strong></h3>
## Download model
Weights for this model are available in Safetensors format.
[Download](/Keltezaa/flux-kaley-cuoco/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to(device)
pipeline.load_lora_weights('Keltezaa/flux-kaley-cuoco', weight_name='Flux.Cuoco-step00000450.safetensors')
image = pipeline('cgi render of a black hair woman with pale skin in a cyberpunk setting , front view').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
vcxa/varco_sft_dpo | vcxa | 2024-10-25T10:30:41Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:NCSOFT/Llama-VARCO-8B-Instruct",
"base_model:finetune:NCSOFT/Llama-VARCO-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T09:53:37Z | ---
base_model: NCSOFT/Llama-VARCO-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** vcxa
- **License:** apache-2.0
- **Finetuned from model:** NCSOFT/Llama-VARCO-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
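The card ships no usage snippet, so here is a minimal, hedged sketch for loading the checkpoint with plain transformers (the repo id comes from this card; the chat-template call and generation settings are assumptions about typical Llama-style usage):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vcxa/varco_sft_dpo")
model = AutoModelForCausalLM.from_pretrained("vcxa/varco_sft_dpo", device_map="auto")

# Hypothetical prompt, formatted with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Summarize what DPO fine-tuning changes about a model."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```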
|
Pclanglais/Headlines-OCR-Correction | Pclanglais | 2024-10-25T10:13:11Z | 21 | 0 | null | [
"safetensors",
"llama",
"fr",
"en",
"de",
"es",
"it",
"license:apache-2.0",
"region:us"
] | null | 2024-10-25T09:46:02Z | ---
license: apache-2.0
language:
- fr
- en
- de
- es
- it
---
**Headlines-OCR-Correction** is a model for the correction of OCR errors and the standardization of French news headlines.
## Usage
Headlines-OCR-Correction uses a custom instruction structure, "### Text ###\n[text]\n\n### Correction ###\n", and a custom EOS token, `#END#`.
Typical usage with vLLM:
```python
from vllm import LLM, SamplingParams

llm = LLM(model="Pclanglais/Headlines-OCR-Correction")
sampling_params = SamplingParams(temperature=0.9, top_p=0.95, max_tokens=4000, presence_penalty=0, stop=["#END#"])
prompt = "### Text ###\n" + user_input + "\n\n### Correction ###\n"  # user_input holds the raw OCR headline
outputs = llm.generate([prompt], sampling_params, use_tqdm=False)
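# Each RequestOutput carries its generated completions; the corrected headline
# is the text of the first one (standard vLLM result handling).
corrected = outputs[0].outputs[0].text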
``` |
prithivMLmods/Castor-3D-Sketchfab-Flux-LoRA | prithivMLmods | 2024-10-25T09:57:35Z | 914 | 19 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-25T09:42:29Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '3D Sketchfab, A low-angle view of a brown military tank on a bright yellow background. The tank has a red number "88" on the side of it. There is a black X on the top of the tank. There are two black wheels on the bottom of it and a black stripe on the right side of the front of the left side of this tank.'
output:
url: images/S1.png
- text: '3D Sketchfab, An eye-level view of a small red building with a green sign on the front of it. The building is surrounded by a small patch of brown dirt. There is a brown roof on the right side of the building and a red door on the left side. There are small green plants on the ground around the building. A black telephone pole is in the center of the image. A white dish is on the roof of the house.'
output:
url: images/S2.png
- text: '3D Sketchfab, a vibrant orange box is adorned with a fish sculpture. The fish sculpture is a vibrant shade of blue, with a striped pattern on its body. It is positioned on a light purple background, with the fish head facing towards the right side of the frame. A knife with a black handle is positioned to the left of the fish sculpture, positioned on top of the box.'
output:
url: images/S3.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 3D Sketchfab
license: creativeml-openrail-m
---
# Castor-3D-Sketchfab-Flux-LoRA
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Castor-3D-Sketchfab-Flux-LoRA**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 23 & 1.8k |
| Epoch | 10 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 39
## Setting Up
```
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Castor-3D-Sketchfab-Flux-LoRA"
trigger_word = "3D Sketchfab" # Leave trigger_word blank if not used.
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
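The setup above only loads the LoRA. Actually sampling an image might look like the following sketch (the prompt is adapted from this card's widget examples; the save path is illustrative):
```
# Assumes `pipe` and `trigger_word` from the setup block above.
prompt = f"{trigger_word}, a low-angle view of a brown military tank on a bright yellow background"
image = pipe(prompt).images[0]
image.save("sketchfab_tank.png")
```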
## App File Structure
/project-root/
├── .gitattributes
├── README.md
├── app.py
└── pythonproject.py
## Trigger words 🧨
You should use `3D Sketchfab` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Castor-3D-Sketchfab-Flux-LoRA/tree/main) them in the Files & versions tab.
|
eddme/mistral-V0.2-finetuned | eddme | 2024-10-25T09:50:14Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T09:44:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
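Pending the authors' own snippet, a minimal hedged sketch for loading this checkpoint (the repo id is taken from this record; device placement and generation settings are assumptions):
```py
from transformers import pipeline

# Load the fine-tuned Mistral checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="eddme/mistral-V0.2-finetuned", device_map="auto")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])  # hypothetical prompt
```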
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
linoyts/yart_art_sd3-5_lora-30-37 | linoyts | 2024-10-25T09:49:13Z | 5 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3.5-large",
"sd3.5",
"sd3.5-diffusers",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"license:other",
"region:us"
] | text-to-image | 2024-10-25T09:41:14Z | ---
base_model: stabilityai/stable-diffusion-3.5-large
library_name: diffusers
license: other
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3.5-large
- sd3.5
- sd3.5-diffusers
instance_prompt: Frog, yarn art style
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3.5-Large DreamBooth LoRA - linoyts/yart_art_sd3-5_lora-30-37
<Gallery />
## Model description
These are linoyts/yart_art_sd3-5_lora-30-37 DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-large.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `Frog, yarn art style` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](/linoyts/yart_art_sd3-5_lora-30-37/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/yart_art_sd3-5_lora-30-37', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('Frog, yarn art style').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here πΎ](/linoyts/yart_art_sd3-5_lora-30-37/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# Minimal usage, mirroring the diffusers snippet above:
image = pipeline('Frog, yarn art style').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
eskayML/electra_interview_duplicated | eskayML | 2024-10-25T09:48:38Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:mrm8488/electra-small-finetuned-squadv2",
"base_model:finetune:mrm8488/electra-small-finetuned-squadv2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-25T09:48:35Z | ---
library_name: transformers
license: apache-2.0
base_model: mrm8488/electra-small-finetuned-squadv2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: electra_interview_duplicated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra_interview_duplicated
This model is a fine-tuned version of [mrm8488/electra-small-finetuned-squadv2](https://huggingface.co/mrm8488/electra-small-finetuned-squadv2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2204
- Accuracy: 0.3581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
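These settings map one-to-one onto the 🤗 `TrainingArguments` API. A minimal sketch (illustration only; the actual training script and data pipeline are not given in this card):
```python
from transformers import TrainingArguments

# Mirror of the hyperparameters listed above; output_dir is a placeholder name.
training_args = TrainingArguments(
    output_dir="electra_interview_duplicated",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```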
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.5854 | 1.0 | 2121 | 2.5418 | 0.2014 |
| 2.4462 | 2.0 | 4242 | 2.4187 | 0.2898 |
| 2.3656 | 3.0 | 6363 | 2.3080 | 0.3204 |
| 2.2985 | 4.0 | 8484 | 2.2448 | 0.3298 |
| 2.2479 | 5.0 | 10605 | 2.2204 | 0.3581 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
eskayML/bert_interview_duplicated | eskayML | 2024-10-25T09:48:34Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-25T09:16:53Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_interview_duplicated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_interview_duplicated
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6667
- Accuracy: 0.4523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.2802 | 1.0 | 2121 | 2.0371 | 0.3804 |
| 2.0794 | 2.0 | 4242 | 1.8813 | 0.4158 |
| 1.9467 | 3.0 | 6363 | 1.7781 | 0.4311 |
| 1.8672 | 4.0 | 8484 | 1.7044 | 0.4429 |
| 1.7766 | 5.0 | 10605 | 1.6667 | 0.4523 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
AlessandroMinervini/llama_factory_first_model | AlessandroMinervini | 2024-10-25T09:44:27Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T09:37:46Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nhatminh/bge-finetune | nhatminh | 2024-10-25T09:44:02Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-10-25T09:40:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mukhtar/whisper-V3-CV17-dev-1EP | mukhtar | 2024-10-25T09:34:36Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_19_0",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-25T09:31:52Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_19_0
model-index:
- name: Whisper V3 CV19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper V3 CV19
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 19.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
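As a rough sketch, the settings above map onto the Trainer API as follows. `Seq2SeqTrainingArguments` is an assumption (Whisper fine-tuning is sequence-to-sequence), and the data pipeline is omitted because the card does not describe it:
```python
from transformers import Seq2SeqTrainingArguments

# Mirror of the hyperparameters listed above; output_dir is a placeholder name.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-V3-CV17-dev-1EP",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```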
### Training results
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
keles/t2_test | keles | 2024-10-25T09:34:34Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-25T09:19:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gokaygokay/Flux-White-Background-LoRA | gokaygokay | 2024-10-25T09:32:20Z | 240 | 43 | diffusers | [
"diffusers",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-09-25T21:06:01Z | ---
license: apache-2.0
language:
- en
base_model:
- black-forest-labs/FLUX.1-dev
library_name: diffusers
tags:
- lora
- diffusers
pipeline_tag: text-to-image
widget:
- text: grid of 4 3D game assets, pixar, in the middle, white background
output:
url: images/example_sdhlx557e.png
- text: dog, in the middle, white background
output:
url: images/example_mtojzmerf.png
- text: two warriors fighting, in the middle, white background
output:
url: images/example_dfmvqtj6h.png
- text: rome colosseum, in the middle, white background
output:
url: images/example_so38781dg.png
- text: tea cup, in the middle, white background
output:
url: images/example_tj600l1in.png
- text: hat, in the middle, white background
output:
url: images/example_oimo0m0iw.png
---
### Usage
For best results, append `, in the middle, white background` to your prompt.
This LoRA was trained with [FAL Fast LoRA Trainer](https://fal.ai/models/fal-ai/flux-lora-fast-training).
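A minimal inference sketch, assuming the standard 🧨 diffusers Flux APIs (this card itself ships no code):
```python
import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-dev model, then attach this LoRA (assumed default weight file).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("gokaygokay/Flux-White-Background-LoRA")

# Follow the recommended prompt template: "<subject>, in the middle, white background".
image = pipe("tea cup, in the middle, white background").images[0]
image.save("tea_cup.png")
```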
<Gallery />
|
paruwka/lzh | paruwka | 2024-10-25T09:27:24Z | 179 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-23T23:21:30Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: lzh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lzh
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3581
- Accuracy: 0.8850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6577 | 1.0 | 125 | 0.5158 | 0.8432 |
| 0.4161 | 2.0 | 250 | 0.3891 | 0.8660 |
| 0.2767 | 3.0 | 375 | 0.3581 | 0.8850 |
### Framework versions
- Transformers 4.44.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
prithivMLmods/Canopus-LoRA-Flux-Anime | prithivMLmods | 2024-10-25T09:08:43Z | 589 | 29 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"Flux",
"FluxDev",
"Anime",
"Euler",
"AdamW8bit",
"Realistic-Anime",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-09-08T07:19:01Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- Flux
- FluxDev
- Anime
- Euler
- AdamW8bit
- Realistic-Anime
widget:
- text: >-
Anime ((masterpiece,best quality, detailed)), outdoor,wind_lift, souryuu
asuka langley, interface headset, red bodysuit, (realistic:1.3)
output:
url: images/AAAA.png
- text: >-
An anime girl in a blue dress and straw hat, with long black hair and
flowing curly bangs, in the style of anime, against a background of a
coastal street by the sea, on a bright sunny day, with flowers on a
windowsill, with a cheerful expression, with detailed design, with a
watercolor painting effect, and vibrant colors, Hayao Miyazakis manga, with
high resolution and clear details --ar 1:2 --stylize 750 --v 6.1
output:
url: images/BBBB.png
- text: >-
Anime theme masterpiece,best quality,1girl,solo,looking at viewer, fur
(clothing), black hair, black legwear,(electric guitar:1.4), reflection,
splash, droplets, rust, sparks, asphalt, ground vehicle, sports car, super
car, mechanical,burning, playing instrument, livestream.
output:
url: images/CCCC.png
- text: Anime girl, swimsuit, red eyes, brown hair
output:
url: images/4.png
- text: >-
Well-fitting man equipped with hoodie and cap hiding upper face in anime
manga style
output:
url: images/7.png
- text: >-
Pretty cyborg lady, lots of details, sakura flowers, fine art, futuristic
setting
output:
url: images/6.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: creativeml-openrail-m
---
# Canopus-LoRA-Flux-Anime
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Canopus-LoRA-Flux-Anime**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW8bit | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 27 & 3.5K+ |
| Epoch | 17 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total images used for training: 220+ [Hi-RES]
## Image Sources : Direct Links [i]
| Source Name | URL | License |
|-----------------|--------------------------------------------------------------------|--------------------------|
| 4K Wallpapers | [4kwallpapers](https://4kwallpapers.com/anime/) | Visit website for details|
| Pixabay | [Pixabay](https://pixabay.com/images/search/anime%20wallpaper/) | Free for commercial use |
| Freepik | [Freepik](https://www.freepik.com/free-photos-vectors/anime-wallpaper) | Free with attribution |
Please make sure to follow the licensing terms of each source when using these images in your project.
**Scroll down to the end for sample generations ⬇️.**
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

lora_repo = "prithivMLmods/Canopus-Anime-Art-Flux-LoRA"
trigger_word = "Anime"  # include this word in prompts to activate the LoRA
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```
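With the pipeline on the GPU, generation follows the usual diffusers call. A sketch; the sampler settings below are common FLUX.1-dev defaults, not values stated in this card:
```python
# Continues from the setup above; `pipe` is already on CUDA.
prompt = "Anime girl, swimsuit, red eyes, brown hair"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("anime_girl.png")
```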
## Trigger prompts
Anime ((masterpiece,best quality, detailed)), outdoor,wind_lift, souryuu asuka langley, interface headset, red bodysuit, (realistic:1.3)
An anime girl in a blue dress and straw hat, with long black hair and flowing curly bangs, in the style of anime, against a background of a coastal street by the sea, on a bright sunny day, with flowers on a windowsill, with a cheerful expression, with detailed design, with a watercolor painting effect, and vibrant colors, Hayao Miyazaki's manga, with high resolution and clear details --ar 1:2 --stylize 750 --v 6.1
Anime theme masterpiece,best quality,1girl,solo,looking at viewer, fur (clothing), black hair, black legwear,(electric guitar:1.4), reflection, splash, droplets, rust, sparks, asphalt, ground vehicle, sports car, super car, mechanical,burning, playing instrument, livestream.
| Parameter | Value |
|-----------------|---------------------------------------------------------------------------------------|
| Prompt | Anime ((masterpiece,best quality, detailed)), outdoor,wind_lift, souryuu asuka langley, interface headset, red bodysuit, (realistic:1.3) |
| Sampler | euler |
## Trigger words
You should use `Anime` to trigger the image generation.
## Sample Generation Using Canopus-Anime-Art-Flux-LoRA
## Image Gallery
|  |  |  |
|--------------------------|--------------------------|--------------------------|
|  |  |  |
|  |  |  |
|  |  |  |
## App File Structure
```
/project-root/
├── .gitattributes
├── README.md
├── app.py
└── pythonproject.py
```
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Canopus-Anime-Art-FluxDev-LoRA/tree/main) them in the Files & versions tab.
🤗: https://hf.co/prithivmlmods |
dyyyyyyyy/Llama3-8B-ScaleQuest | dyyyyyyyy | 2024-10-25T09:08:28Z | 15 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:dyyyyyyyy/ScaleQuest-Math",
"arxiv:2410.18693",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-12T13:32:17Z | ---
license: apache-2.0
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
---
<p align="center"><h2 align="center">Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch</h2></p>
# Model Card for Llama3-8B-ScaleQuest
<!-- Provide a quick summary of what the model is/does. -->
We introduce ScaleQuest, a scalable and novel data synthesis method that utilizes small-size open-source models to generate questions from scratch without the need for seed data with complex augmentation constraints.
* 🌐 Project Page: [https://scalequest.github.io](https://scalequest.github.io/)
* 💻 Code: [https://github.com/yyDing1/ScaleQuest](https://github.com/yyDing1/ScaleQuest/)
* 📝 Paper: [Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch](https://arxiv.org/abs/2410.18693)
* 💾 Models in the 🤗 HuggingFace Hub: [ScaleQuest-Models](https://huggingface.co/collections/dyyyyyyyy/scalequest-670a7dc2623c91990f28913b)
<p align="center">
<img src="https://github.com/yyDing1/ScaleQuest/raw/main/img/results.png">
</p>
## Datasets & Models
Math Dataset: [link](https://huggingface.co/datasets/dyyyyyyyy/ScaleQuest-Math)
We release two question generator models and four problem-solving models.
| Model | Type | MATH | Olympiad Bench | 🤗 HuggingFace<br />Download Link |
| - | :-: | :-: | :-: | :-: |
| ScaleQuest-DeepSeekMath-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen) |
| ScaleQuest-Qwen2-Math-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen) |
| Mistral-7B-ScaleQuest | problem solver | 62.9 | 26.8 | [link](https://huggingface.co/dyyyyyyyy/Mistral-7B-ScaleQuest) |
| Llama3-8B-ScaleQuest | problem solver | 64.4 | 25.3 | [link](https://huggingface.co/dyyyyyyyy/Llama3-8B-ScaleQuest) |
| DeepSeekMath-7B-ScaleQuest | problem solver | 66.6 | 29.9 | [link](https://huggingface.co/dyyyyyyyy/DeepSeekMath-7B-ScaleQuest) |
| Qwen2-Math-7B-ScaleQuest | problem solver | 73.4 | 38.5 | [link](https://huggingface.co/dyyyyyyyy/Qwen2-Math-7B-ScaleQuest) |
## Demo usage
Below is an example using `Llama3-8B-ScaleQuest`
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "dyyyyyyyy/Llama3-8B-ScaleQuest"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
sys_prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request." + "\n\n"
query_prompt = "### Instruction:" + "\n"
# {query}
prompt_after_query = "\n\n"
resp_prompt = "### Response:" + "\n"
prompt_before_resp = ""
# {resp}
delim = "\n\n"
prefix_prompt = f"{query_prompt}{question}{prompt_after_query}{resp_prompt}{prompt_before_resp}".rstrip(" ")
full_prompt = sys_prompt + delim.join([prefix_prompt])
# print(full_prompt)
inputs = tokenizer(full_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
```
## Citation
```bibtex
@article{ding2024unleashing,
title={Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch},
author={Ding, Yuyang and Shi, Xinyu and Liang, Xiaobo and Li, Juntao and Zhu, Qiaoming and Zhang, Min},
journal={https://arxiv.org/abs/2410.18693},
year={2024}
}
``` |
dyyyyyyyy/DeepSeekMath-7B-ScaleQuest | dyyyyyyyy | 2024-10-25T09:07:59Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:dyyyyyyyy/ScaleQuest-Math",
"arxiv:2410.18693",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-12T13:40:51Z | ---
license: apache-2.0
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
---
<p align="center"><h2 align="center">Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch</h2></p>
# Model Card for DeepSeekMath-7B-ScaleQuest
<!-- Provide a quick summary of what the model is/does. -->
We introduce ScaleQuest, a scalable and novel data synthesis method that utilizes small-size open-source models to generate questions from scratch without the need for seed data with complex augmentation constraints.
* 🌐 Project Page: [https://scalequest.github.io](https://scalequest.github.io/)
* 💻 Code: [https://github.com/yyDing1/ScaleQuest](https://github.com/yyDing1/ScaleQuest/)
* 📝 Paper: [Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch](https://arxiv.org/abs/2410.18693)
* 💾 Models in the 🤗 HuggingFace Hub: [ScaleQuest-Models](https://huggingface.co/collections/dyyyyyyyy/scalequest-670a7dc2623c91990f28913b)
<p align="center">
<img src="https://github.com/yyDing1/ScaleQuest/raw/main/img/results.png">
</p>
## Datasets & Models
Math Dataset: [link](https://huggingface.co/datasets/dyyyyyyyy/ScaleQuest-Math)
We release two question generator models and four problem-solving models.
| Model | Type | MATH | Olympiad Bench | 🤗 HuggingFace<br />Download Link |
| - | :-: | :-: | :-: | :-: |
| ScaleQuest-DeepSeekMath-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen) |
| ScaleQuest-Qwen2-Math-7B-QGen | question generator | - | - | [link](https://huggingface.co/dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen) |
| Mistral-7B-ScaleQuest | problem solver | 62.9 | 26.8 | [link](https://huggingface.co/dyyyyyyyy/Mistral-7B-ScaleQuest) |
| Llama3-8B-ScaleQuest | problem solver | 64.4 | 25.3 | [link](https://huggingface.co/dyyyyyyyy/Llama3-8B-ScaleQuest) |
| DeepSeekMath-7B-ScaleQuest | problem solver | 66.6 | 29.9 | [link](https://huggingface.co/dyyyyyyyy/DeepSeekMath-7B-ScaleQuest) |
| Qwen2-Math-7B-ScaleQuest | problem solver | 73.4 | 38.5 | [link](https://huggingface.co/dyyyyyyyy/Qwen2-Math-7B-ScaleQuest) |
## Demo usage
Below is an example using `DeepSeekMath-7B-ScaleQuest`
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "dyyyyyyyy/DeepSeekMath-7B-ScaleQuest"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
question = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."
sys_prompt = ""
query_prompt = "User:" + " "
# {query}
prompt_after_query = "\n" + "Please reason step by step, and put your final answer within \\boxed{}." + "\n\n"
resp_prompt = "Assistant:" + " "
prompt_before_resp = ""
# {resp}
delim = "<ο½endβofβsentenceο½>"
prefix_prompt = f"{query_prompt}{question}{prompt_after_query}{resp_prompt}{prompt_before_resp}".rstrip(" ")
full_prompt = sys_prompt + delim.join([prefix_prompt])
# print(full_prompt)
inputs = tokenizer(full_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True))
```
## Citation
```bibtex
@article{ding2024unleashing,
title={Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch},
author={Ding, Yuyang and Shi, Xinyu and Liang, Xiaobo and Li, Juntao and Zhu, Qiaoming and Zhang, Min},
journal={https://arxiv.org/abs/2410.18693},
year={2024}
}
``` |