modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
RESMPDEV/Llama-3-8B-Instruct-Wrath | RESMPDEV | 2024-08-20T17:17:47Z | 7 | 0 | null | [
"safetensors",
"llama",
"en",
"license:llama3",
"region:us"
] | null | 2024-08-20T10:36:01Z | ---
license: llama3
language:
- en
---
# Llama-3-8B-Instruct-Wrath
This is a line of experimental models, with zero finetuning or post-training, based on my interpretation of the Latin rendering of the seven deadly sins.
If you came from the collection, you may notice that there are only six models instead of the seven you would expect.
The missing model in this case is greed.
These models were made with a very slightly modified notebook by [Failspy](https://huggingface.co/failspy) and his great work on the ablation [library](https://github.com/FailSpy/abliterator), and were inspired by the [Mopey Mule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) model.
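For context, ablation of this kind removes a single linear direction from the model's hidden states. Below is a generic, purely illustrative sketch of that operation; it is not the author's notebook and not the abliterator API, and `ablate_direction` and the toy tensors are made up for the example:
```python
# Illustrative only: project one feature direction out of a batch of hidden states.
import numpy as np

def ablate_direction(hidden: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of every hidden-state vector along `direction`."""
    v = direction / np.linalg.norm(direction)
    return hidden - np.outer(hidden @ v, v)

states = np.random.randn(4, 16)   # toy batch of hidden states
feature = np.random.randn(16)     # toy "sin" feature direction
ablated = ablate_direction(states, feature)
# The ablated states now have (numerically) zero component along the feature direction.
print(np.allclose(ablated @ (feature / np.linalg.norm(feature)), 0.0))
```
A feature that is not captured by a single linear direction would largely survive this kind of projection, which is what the hypothesis below alludes to.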
My very undercooked hypothesis for why this may be the case is that greed is possibly a non-linear feature, whereas the other six deadly sins are quite well expressed in these models. |
STomoya/vit_base_patch16_224.st_mae_sb1k | STomoya | 2024-08-20T17:11:01Z | 18 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-08-20T17:08:20Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for vit_base_patch16_224.st_mae_sb1k
## Model Details
- **MSE:** 0.5783191919326782
|
BoghdadyJR/tmp | BoghdadyJR | 2024-08-20T17:08:34Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"opt",
"trl",
"sft",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"region:us"
] | null | 2024-08-20T17:07:53Z | ---
license: other
base_model: facebook/opt-350m
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: tmp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/boghdady95/huggingface/runs/3xawbmyg)
# tmp
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Khan1502/AK_Lora | Khan1502 | 2024-08-20T17:07:14Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-20T16:15:16Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: ASIF
---
# Asif_Lora_Flux
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ASIF` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Khan1502/ASIF_LoRa_Flux', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
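As an illustration of the fusing mentioned above, here is a minimal sketch that merges the LoRA into the base weights for repeated generations. It assumes a recent diffusers version where the pipeline exposes `fuse_lora`/`unfuse_lora` and is not an official snippet from this repo:
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('Khan1502/ASIF_LoRa_Flux', weight_name='lora.safetensors')

pipeline.fuse_lora()    # merge the LoRA deltas into the base weights
image = pipeline('ASIF standing on a mountain at sunrise').images[0]
pipeline.unfuse_lora()  # restore the original base weights if needed
```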
|
ljb0967/marian-finetuned-kde4-en-to-fr | ljb0967 | 2024-08-20T16:50:13Z | 5 | 1 | null | [
"tensorboard",
"safetensors",
"marian",
"tanslation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-20T11:02:27Z | ---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.91210143343284
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8554
- Bleu: 52.9121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
Disra/lora-anime-test-02 | Disra | 2024-08-20T16:42:17Z | 216 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-08-19T15:17:53Z | ---
license: creativeml-openrail-m
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_0_0.png
- text: 'anime style digital art of a girl with blue-green hair and green eyes wearing a one piece swimsuit'
parameters:
negative_prompt: ''''
output:
url: ./assets/image_1_0.png
---
# lora-anime-test-02
This is a LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
anime style digital art of a girl with blue-green hair and green eyes wearing a one piece swimsuit
```
# Example Images
Base Flux images (no LoRA) are on top, and images with the LoRA are on the bottom (same prompt and seed).




## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 142
- Training steps: 5000
- Learning rate: 0.0001
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: bf16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LoRA Rank: 4
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### anime-test-01
- Repeats: 0
- Total number of images: 35
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: center
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'Disra/lora-anime-test-02'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "anime style digital art of a girl with blue-green hair and green eyes wearing a one piece swimsuit"
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1024,
height=1024,
guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
|
Sekais/zzz | Sekais | 2024-08-20T16:21:53Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-11T18:40:03Z | ---
license: creativeml-openrail-m
---
|
britojr/llama3.1-sec10q_v3 | britojr | 2024-08-20T16:11:10Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-08-20T15:25:17Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/model_bert_large_cased_d_political_bias_political_bias_ep_2_7_a_sqn_a_b_p_100_5_v_16 | BogdanTurbal | 2024-08-20T16:08:23Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-08-20T15:38:19Z | ---
license: apache-2.0
base_model: google-bert/bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_bert_large_cased_d_political_bias_political_bias_ep_2_7_a_sqn_a_b_p_100_5_v_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_bert_large_cased_d_political_bias_political_bias_ep_2_7_a_sqn_a_b_p_100_5_v_16
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7786
- Accuracy: 0.6761
- F1 Micro: 0.6761
- Auc: 0.7483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.7069 | 1.0 | 73 | 0.6068 | 0.6658 | 0.6658 | 0.7319 |
| 0.4953 | 2.0 | 146 | 0.6024 | 0.6821 | 0.6821 | 0.7508 |
| 0.4319 | 3.0 | 219 | 0.6709 | 0.6624 | 0.6624 | 0.7227 |
| 0.3633 | 4.0 | 292 | 0.7182 | 0.6641 | 0.6641 | 0.7470 |
| 0.1728 | 5.0 | 365 | 0.7268 | 0.6821 | 0.6821 | 0.7496 |
| 0.1446 | 6.0 | 438 | 0.7643 | 0.6753 | 0.6753 | 0.7476 |
| 0.1266 | 7.0 | 511 | 0.7786 | 0.6761 | 0.6761 | 0.7483 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF | bartowski | 2024-08-20T16:05:40Z | 259 | 4 | null | [
"gguf",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"text-generation",
"en",
"base_model:Gryphe/Pantheon-RP-1.6-12b-Nemo",
"base_model:quantized:Gryphe/Pantheon-RP-1.6-12b-Nemo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-08-19T08:00:18Z | ---
base_model: Gryphe/Pantheon-RP-1.6-12b-Nemo
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of Pantheon-RP-1.6-12b-Nemo
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3600">b3600</a> for quantization.
Original model: https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Torrent files
https://aitorrent.zerroug.de/bartowski-pantheon-rp-1-6-12b-nemo-gguf-torrent/
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [Pantheon-RP-1.6-12b-Nemo-bf16.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-bf16.gguf) | bf16 | 24.50GB | false | Full BF16 weights. |
| [Pantheon-RP-1.6-12b-Nemo-Q8_0.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. |
| [Pantheon-RP-1.6-12b-Nemo-Q6_K_L.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q6_K_L.gguf) | Q6_K_L | 10.38GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q6_K.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q5_K_L.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q5_K_L.gguf) | Q5_K_L | 9.14GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q5_K_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q5_K_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q4_K_L.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for most use cases, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q3_K_XL.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q3_K_XL.gguf) | Q3_K_XL | 7.15GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [Pantheon-RP-1.6-12b-Nemo-Q4_K_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-IQ4_XS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-IQ4_XS.gguf) | IQ4_XS | 6.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Pantheon-RP-1.6-12b-Nemo-Q3_K_L.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low RAM availability. |
| [Pantheon-RP-1.6-12b-Nemo-Q3_K_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. |
| [Pantheon-RP-1.6-12b-Nemo-IQ3_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-IQ3_M.gguf) | IQ3_M | 5.72GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Pantheon-RP-1.6-12b-Nemo-Q3_K_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. |
| [Pantheon-RP-1.6-12b-Nemo-Q2_K_L.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q2_K_L.gguf) | Q2_K_L | 5.45GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [Pantheon-RP-1.6-12b-Nemo-IQ3_XS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-IQ3_XS.gguf) | IQ3_XS | 5.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Pantheon-RP-1.6-12b-Nemo-Q2_K.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. |
| [Pantheon-RP-1.6-12b-Nemo-IQ2_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF/blob/main/Pantheon-RP-1.6-12b-Nemo-IQ2_M.gguf) | IQ2_M | 4.44GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF --include "Pantheon-RP-1.6-12b-Nemo-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Pantheon-RP-1.6-12b-Nemo-GGUF --include "Pantheon-RP-1.6-12b-Nemo-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Pantheon-RP-1.6-12b-Nemo-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
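As a rough illustration of that rule of thumb (not part of the original guidance), the sketch below picks the largest quant from the table above that fits a given VRAM budget; the 2GB headroom and the `pick_quant` helper are assumptions for the example:
```python
# File sizes (GB) copied from the quant table above.
quants_gb = {
    "Q8_0": 13.02, "Q6_K_L": 10.38, "Q6_K": 10.06, "Q5_K_L": 9.14,
    "Q5_K_M": 8.73, "Q4_K_M": 7.48, "IQ4_XS": 6.74, "Q3_K_M": 6.08,
    "IQ3_M": 5.72, "Q2_K": 4.79,
}

def pick_quant(vram_gb: float, headroom_gb: float = 2.0):
    """Return the largest listed quant whose file fits in VRAM minus headroom."""
    budget = vram_gb - headroom_gb
    fitting = {name: size for name, size in quants_gb.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(12.0))  # a 12GB card -> 'Q5_K_L' under this rule of thumb
```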
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
tehreemfarooqi/whisper-medium-ur-cv-17 | tehreemfarooqi | 2024-08-20T16:04:35Z | 133 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-08-20T16:03:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rizgan/whisper-tiny-ckb | rizgan | 2024-08-20T16:04:14Z | 8 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"ckb",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-20T14:16:07Z | ---
language:
- ckb
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: "Whisper Tiny for Central Kurdish (Soran\xEE) - Rizgan Gerdenzer\xEE"
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: ckb
split: None
args: 'config: ckb, split: test'
metrics:
- name: Wer
type: wer
value: 58.49012852789188
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny for Central Kurdish (Soranî) - Rizgan Gerdenzerî
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3716
- Wer: 58.4901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3783 | 1.2330 | 1000 | 0.4454 | 66.8478 |
| 0.2803 | 2.4661 | 2000 | 0.3868 | 60.8122 |
| 0.2231 | 3.6991 | 3000 | 0.3738 | 58.7816 |
| 0.2016 | 4.9322 | 4000 | 0.3716 | 58.4901 |
### Framework versions
- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
BogdanTurbal/model_roberta_large_d_hate_bias_political_bias_ep_2_7_a_sqn_a_b_p_100_5_v_16 | BogdanTurbal | 2024-08-20T16:02:29Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_16",
"base_model:finetune:BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_16",
"license:mit",
"region:us"
] | null | 2024-08-20T15:28:46Z | ---
license: mit
base_model: BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_16
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_roberta_large_d_hate_bias_political_bias_ep_2_7_a_sqn_a_b_p_100_5_v_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_roberta_large_d_hate_bias_political_bias_ep_2_7_a_sqn_a_b_p_100_5_v_16
This model is a fine-tuned version of [BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_16](https://huggingface.co/BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0548
- Accuracy: 0.7251
- F1 Micro: 0.7251
- Auc: 0.7987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.6058 | 1.0 | 73 | 0.6016 | 0.6830 | 0.6830 | 0.7480 |
| 0.4522 | 2.0 | 146 | 0.6185 | 0.7070 | 0.7070 | 0.7772 |
| 0.3725 | 3.0 | 219 | 0.6318 | 0.7113 | 0.7113 | 0.7909 |
| 0.289 | 4.0 | 292 | 0.6909 | 0.7139 | 0.7139 | 0.7970 |
| 0.1485 | 5.0 | 365 | 0.8417 | 0.7148 | 0.7148 | 0.7980 |
| 0.1506 | 6.0 | 438 | 1.0025 | 0.7182 | 0.7182 | 0.7991 |
| 0.2996 | 7.0 | 511 | 1.0548 | 0.7251 | 0.7251 | 0.7987 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
xiaohui2022/lion_ckpt | xiaohui2022 | 2024-08-20T15:59:52Z | 0 | 8 | null | [
"region:us"
] | null | 2022-12-28T18:30:05Z | - Check `Files and versions` for the checkpoints, samples and test data. |
royallab/MN-12B-LooseCannon-v2-exl2 | royallab | 2024-08-20T15:59:08Z | 44 | 3 | null | [
"en",
"region:us"
] | null | 2024-08-20T14:41:27Z | ---
language:
- en
---
## Information
This is an Exl2 quantized version of [MN-LooseCannon-12B-v2](https://huggingface.co/royallab/MN-LooseCannon-12B-v2)
Please refer to the original creator for more information.
Calibration dataset: Exl2 default
## Branches:
- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight
## Notes
- 6bpw is recommended for the best quality-to-VRAM-usage ratio (assuming you have enough VRAM).
- Quants greater than 6bpw will not be created because there is no improvement in using them. If you really want them, ask someone else or make them yourself.
## Download
With [async-hf-downloader](https://github.com/theroyallab/async-hf-downloader): A lightweight and asynchronous huggingface downloader created by me
```shell
./async-hf-downloader royallab/MN-LooseCannon-12B-v2-exl2 -r 6bpw -p MN-LooseCannon-12B-v2-exl2-6bpw
```
With HuggingFace hub (`pip install huggingface_hub`)
```shell
huggingface-cli download royallab/MN-LooseCannon-12B-v2-exl2 --revision 6bpw --local-dir MN-LooseCannon-12B-v2-exl2-6bpw
```
## Run in TabbyAPI
TabbyAPI is a pure exllamav2 FastAPI server developed by us. You can find TabbyAPI's source code here: [https://github.com/theroyallab/TabbyAPI](https://github.com/theroyallab/TabbyAPI)
1. Inside TabbyAPI's config.yml, set `model_name` to `MN-LooseCannon-12B-v2-exl2-6bpw`
   1. You can also use an argument `--model_name MN-LooseCannon-12B-v2-exl2-6bpw` on startup, or you can use the `/v2/model/load` endpoint
2. Launch TabbyAPI inside your python env by running `./start.bat` or `./start.sh`
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri
You should not feel obligated to donate, but if you do, I'd appreciate it.
---
|
imdatta0/Mistral-7B-v0.3_metamath_ortho | imdatta0 | 2024-08-20T15:59:00Z | 23 | 0 | peft | [
"peft",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-08-16T04:02:01Z | ---
base_model: unsloth/mistral-7b-v0.3
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_metamath_ortho
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.3_metamath_ortho
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8319
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7761 | 0.0211 | 13 | 0.8475 |
| 5.7285 | 0.0421 | 26 | 7.1242 |
| 6.6463 | 0.0632 | 39 | 6.4624 |
| 6.3183 | 0.0842 | 52 | 6.2700 |
| 6.3056 | 0.1053 | 65 | 6.3511 |
| 6.2849 | 0.1264 | 78 | 6.2801 |
| 6.2952 | 0.1474 | 91 | 6.3205 |
| 6.2939 | 0.1685 | 104 | 6.3566 |
| 6.2779 | 0.1896 | 117 | 6.2580 |
| 6.087 | 0.2106 | 130 | 5.9797 |
| 5.8495 | 0.2317 | 143 | 5.8683 |
| 5.6782 | 0.2527 | 156 | 5.5177 |
| 5.4335 | 0.2738 | 169 | 5.3885 |
| 5.4451 | 0.2949 | 182 | 5.7948 |
| 5.5833 | 0.3159 | 195 | 5.2887 |
| 5.2684 | 0.3370 | 208 | 5.3036 |
| 5.1159 | 0.3580 | 221 | 5.1110 |
| 5.0046 | 0.3791 | 234 | 4.9806 |
| 4.9134 | 0.4002 | 247 | 4.9382 |
| 4.9145 | 0.4212 | 260 | 4.9544 |
| 4.7976 | 0.4423 | 273 | 4.7954 |
| 4.7328 | 0.4633 | 286 | 4.6897 |
| 4.6799 | 0.4844 | 299 | 4.5793 |
| 4.5047 | 0.5055 | 312 | 4.6603 |
| 4.529 | 0.5265 | 325 | 4.4405 |
| 4.3835 | 0.5476 | 338 | 4.3916 |
| 4.4279 | 0.5687 | 351 | 4.2860 |
| 4.3177 | 0.5897 | 364 | 4.3171 |
| 4.39 | 0.6108 | 377 | 4.3272 |
| 4.3138 | 0.6318 | 390 | 4.3753 |
| 4.2269 | 0.6529 | 403 | 4.3339 |
| 4.1075 | 0.6740 | 416 | 4.1693 |
| 4.2285 | 0.6950 | 429 | 4.1187 |
| 4.1297 | 0.7161 | 442 | 4.1251 |
| 4.0021 | 0.7371 | 455 | 4.0365 |
| 4.0089 | 0.7582 | 468 | 4.0025 |
| 3.9458 | 0.7793 | 481 | 3.9924 |
| 3.9405 | 0.8003 | 494 | 3.9254 |
| 3.9594 | 0.8214 | 507 | 3.8890 |
| 3.9056 | 0.8424 | 520 | 3.8774 |
| 3.8639 | 0.8635 | 533 | 3.8758 |
| 3.8543 | 0.8846 | 546 | 3.8680 |
| 3.9097 | 0.9056 | 559 | 3.8502 |
| 3.8503 | 0.9267 | 572 | 3.8287 |
| 3.789 | 0.9478 | 585 | 3.8357 |
| 3.7923 | 0.9688 | 598 | 3.8299 |
| 3.8071 | 0.9899 | 611 | 3.8319 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
PrunaAI/ContactDoctor-Bio-Medical-Llama-3-8B-QUANTO-int4bit-smashed | PrunaAI | 2024-08-20T15:48:31Z | 5 | 1 | null | [
"pruna-ai",
"base_model:ContactDoctor/Bio-Medical-Llama-3-8B",
"base_model:finetune:ContactDoctor/Bio-Medical-Llama-3-8B",
"region:us"
] | null | 2024-08-20T15:41:05Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ContactDoctor/Bio-Medical-Llama-3-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ContactDoctor/Bio-Medical-Llama-3-8B are installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# IMPORTS: add any extra imports required by the original repo here
model = AutoModelForCausalLM.from_pretrained("PrunaAI/ContactDoctor-Bio-Medical-Llama-3-8B-QUANTO-int4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ContactDoctor/Bio-Medical-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ContactDoctor/Bio-Medical-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
mradermacher/Qra-13b-instruct-i1-GGUF | mradermacher | 2024-08-20T15:47:51Z | 18 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"dataset:Lbuk/alpaca_data_pl.json",
"base_model:TeeZee/Qra-13b-instruct",
"base_model:quantized:TeeZee/Qra-13b-instruct",
"license:llama2",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-20T11:45:27Z | ---
base_model: TeeZee/Qra-13b-instruct
datasets:
- Lbuk/alpaca_data_pl.json
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TeeZee/Qra-13b-instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF/resolve/main/Qra-13b-instruct.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
CallMeDaniel/Llama-2-7b-chat-hf_vn | CallMeDaniel | 2024-08-20T15:37:17Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"vi",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:quantized:meta-llama/Llama-2-7b-chat-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2023-11-14T16:44:11Z | ---
library_name: transformers
base_model: meta-llama/Llama-2-7b-chat-hf
language:
- vi
---
# Vietnamese Fine-tuned Llama-2-7b-chat-hf
This repository contains a Vietnamese-tuned version of the `Llama-2-7b-chat-hf` model, which has been fine-tuned on Vietnamese datasets using LoRA (Low-Rank Adaptation) techniques.
## Model Details
This model is a fine-tuned version of the Llama-2-7b-chat-hf model, specifically adapted for improved performance on Vietnamese language tasks. It uses LoRA fine-tuning to efficiently adapt the large language model to Vietnamese data while maintaining much of the original model's general knowledge and capabilities.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Daniel Du](https://github.com/danghoangnhan)
- **Model type:** Large Language Model
- **Language(s) (NLP):** Vietnamese
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
- **Language:** Vietnamese
### Direct Use
You can use this model directly with the Hugging Face Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Load the LoRA configuration and model
peft_model_id = "CallMeMrFern/Llama-2-7b-chat-hf_vn"
config = PeftConfig.from_pretrained(peft_model_id)
model = PeftModel.from_pretrained(base_model, peft_model_id)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
# Example usage
input_text = "Xin chร o, hรดm nay thแปi tiแบฟt thแบฟ nร o?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
- This model is specifically fine-tuned for Vietnamese and may not perform as well on other languages.
- The model inherits limitations from the base Llama-2-7b-chat-hf model.
- Performance may vary depending on the specific task and domain.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
Dataset: alpaca_translate_GPT_35_10_20k.json (Vietnamese translation of the Alpaca dataset)
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
### Model Architecture and Objective
[More Information Needed]
## Citation
If you use this model in your research, please cite:
```
@misc{vietnamese_llama2_7b_chat,
author = {[Your Name]},
title = {Vietnamese Fine-tuned Llama-2-7b-chat-hf},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://huggingface.co/CallMeMrFern/Llama-2-7b-chat-hf_vn}}
}
```
## Training procedure
The following `bitsandbytes` quantization config was used during training (a code sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
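The settings above can be expressed as a `BitsAndBytesConfig`. The sketch below is illustrative only: it simply maps the listed values onto the Transformers API and is not taken from the original training script.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Reconstruction of the 8-bit quantization settings listed above (illustrative).
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```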
### Framework versions
- PEFT 0.6.3.dev0
## Fine-tuning Details
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **LoRA Config** (see the sketch after this list):
- Target Modules: `["q_proj", "v_proj"]`
- Precision: 8-bit
- **Dataset:** `alpaca_translate_GPT_35_10_20k.json` (Vietnamese translation of the Alpaca dataset)
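For reference, the LoRA configuration above corresponds roughly to the following `peft` setup. This is a minimal sketch: the rank, alpha, and dropout values are assumptions that are not reported in this card; only the target modules and 8-bit loading follow the list above.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 8-bit base model, as stated above.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# r, lora_alpha and lora_dropout are placeholder values (not reported in this card).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```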
## Training Procedure
The model was fine-tuned using the following command:
```bash
python finetune/lora.py \
--base_model meta-llama/Llama-2-7b-chat-hf \
--model_type llama \
--data_dir data/general/alpaca_translate_GPT_35_10_20k.json \
--output_dir finetuned/meta-llama/Llama-2-7b-chat-hf \
--lora_target_modules '["q_proj", "v_proj"]' \
--micro_batch_size 1
```
For multi-GPU training, a distributed training approach was used.
## Evaluation Results
[Include any evaluation results, perplexity scores, or benchmark performances here]
## Acknowledgements
- This project is part of the TF07 Course offered by ProtonX.
- We thank the creators of the original Llama-2-7b-chat-hf model and the Hugging Face team for their tools and resources.
- Appreciation to [VietnamAIHub/Vietnamese_LLMs](https://github.com/VietnamAIHub/Vietnamese_LLMs) for the translated dataset. |
Moreza009/aya23-8b-double-quantized | Moreza009 | 2024-08-20T15:32:35Z | 80 | 0 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-08-20T15:28:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BogdanTurbal/model_bert_large_cased_d_gender_bias_ep_2_sqn_a_p_100_v_16 | BogdanTurbal | 2024-08-20T15:19:38Z | 7 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-08-20T15:18:53Z | ---
license: apache-2.0
base_model: google-bert/bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_bert_large_cased_d_gender_bias_ep_2_sqn_a_p_100_v_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_bert_large_cased_d_gender_bias_ep_2_sqn_a_p_100_v_16
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3600
- Accuracy: 0.8512
- F1 Micro: 0.8512
- Auc: 0.9188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.4296 | 1.0 | 374 | 0.3806 | 0.8336 | 0.8336 | 0.9112 |
| 0.3477 | 2.0 | 748 | 0.3600 | 0.8512 | 0.8512 | 0.9188 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ystemsrx/Qwen2-Boundless | ystemsrx | 2024-08-20T15:11:12Z | 544 | 52 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"Qwen",
"causal-lm",
"fine-tuned",
"ethics",
"Chinese",
"text2text-generation",
"zh",
"dataset:ystemsrx/Bad_Data_Alpaca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-08-19T06:49:29Z | ---
license: apache-2.0
datasets:
- ystemsrx/Bad_Data_Alpaca
language:
- zh
library_name: transformers
pipeline_tag: text2text-generation
tags:
- Qwen
- causal-lm
- fine-tuned
- ethics
- Chinese
---
[中文 (Chinese)](README.zh.md)
# Qwen2-Boundless
## Overview
Qwen2-Boundless is a fine-tuned model based on Qwen2-1.5B-Instruct, designed to answer various types of questions, including those involving ethical, illegal, pornographic, and violent content. This model has been specifically trained on a dataset that allows it to handle complex and diverse scenarios. It is important to note that the fine-tuning dataset is entirely in Chinese, so the model performs better in Chinese.
> **Warning**: This model is intended for research and testing purposes only. Users should comply with local laws and regulations and are responsible for their actions.
## How to Use
You can load and use the model with the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import os
device = "cuda" # the device to load the model onto
current_directory = os.path.dirname(os.path.abspath(__file__))
model = AutoModelForCausalLM.from_pretrained(
current_directory,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)
prompt = "Hello?"
messages = [
{"role": "system", "content": ""},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
### Continuous Conversation
To enable continuous conversation, use the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import os
device = "cuda" # the device to load the model onto
# Get the current script's directory
current_directory = os.path.dirname(os.path.abspath(__file__))
model = AutoModelForCausalLM.from_pretrained(
current_directory,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(current_directory)
messages = [
{"role": "system", "content": ""}
]
while True:
# Get user input
user_input = input("User: ")
# Add user input to the conversation
messages.append({"role": "user", "content": user_input})
# Prepare the input text
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
# Generate a response
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
# Decode and print the response
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(f"Assistant: {response}")
# Add the generated response to the conversation
messages.append({"role": "assistant", "content": response})
```
### Streaming Response
For applications requiring streaming responses, use the following code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
from transformers.trainer_utils import set_seed
from threading import Thread
import random
import os
DEFAULT_CKPT_PATH = os.path.dirname(os.path.abspath(__file__))
def _load_model_tokenizer(checkpoint_path, cpu_only):
tokenizer = AutoTokenizer.from_pretrained(checkpoint_path, resume_download=True)
device_map = "cpu" if cpu_only else "auto"
model = AutoModelForCausalLM.from_pretrained(
checkpoint_path,
torch_dtype="auto",
device_map=device_map,
resume_download=True,
).eval()
model.generation_config.max_new_tokens = 512 # For chat.
return model, tokenizer
def _get_input() -> str:
while True:
try:
message = input('User: ').strip()
except UnicodeDecodeError:
print('[ERROR] Encoding error in input')
continue
except KeyboardInterrupt:
exit(1)
if message:
return message
print('[ERROR] Query is empty')
def _chat_stream(model, tokenizer, query, history):
conversation = [
{'role': 'system', 'content': ''},
]
for query_h, response_h in history:
conversation.append({'role': 'user', 'content': query_h})
conversation.append({'role': 'assistant', 'content': response_h})
conversation.append({'role': 'user', 'content': query})
inputs = tokenizer.apply_chat_template(
conversation,
add_generation_prompt=True,
return_tensors='pt',
)
inputs = inputs.to(model.device)
streamer = TextIteratorStreamer(tokenizer=tokenizer, skip_prompt=True, timeout=60.0, skip_special_tokens=True)
generation_kwargs = dict(
input_ids=inputs,
streamer=streamer,
)
thread = Thread(target=model.generate, kwargs=generation_kwargs)
thread.start()
for new_text in streamer:
yield new_text
def main():
checkpoint_path = DEFAULT_CKPT_PATH
seed = random.randint(0, 2**32 - 1) # Generate a random seed
set_seed(seed) # Set the random seed
cpu_only = False
history = []
model, tokenizer = _load_model_tokenizer(checkpoint_path, cpu_only)
while True:
query = _get_input()
print(f"\nUser: {query}")
print(f"\nAssistant: ", end="")
try:
partial_text = ''
for new_text in _chat_stream(model, tokenizer, query, history):
print(new_text, end='', flush=True)
partial_text += new_text
print()
history.append((query, partial_text))
except KeyboardInterrupt:
print('Generation interrupted')
continue
if __name__ == "__main__":
main()
```
## Dataset
The Qwen2-Boundless model was fine-tuned using a specific dataset named `bad_data.json`, which includes a wide range of text content covering topics related to ethics, law, pornography, and violence. The fine-tuning dataset is entirely in Chinese, so the model performs better in Chinese. If you are interested in exploring or using this dataset, you can find it via the following link:
- [bad_data.json Dataset](https://huggingface.co/datasets/ystemsrx/Bad_Data_Alpaca)
We also used some cybersecurity-related data that was cleaned and organized from [this file](https://github.com/Clouditera/SecGPT/blob/main/secgpt-mini/%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%9B%9E%E7%AD%94%E9%9D%A2%E9%97%AE%E9%A2%98-cot.txt).
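If you want to inspect the fine-tuning data yourself, a minimal sketch along these lines should work, assuming the dataset repository exposes standard Alpaca-style records loadable through the `datasets` library:

```python
from datasets import load_dataset

# Assumption: the dataset repo is loadable via the standard `datasets` auto-loader.
ds = load_dataset("ystemsrx/Bad_Data_Alpaca", split="train")
print(ds[0])
```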
## GitHub Repository
For more details about the model and ongoing updates, please visit our GitHub repository:
- [GitHub: ystemsrx/Qwen2-Boundless](https://github.com/ystemsrx/Qwen2-Boundless)
## License
This model and dataset are open-sourced under the Apache 2.0 License.
## Disclaimer
All content provided by this model is for research and testing purposes only. The developers of this model are not responsible for any potential misuse. Users should comply with relevant laws and regulations and are solely responsible for their actions. |
srikarvar/multilingual-e5-small-pairclass-contrastive | srikarvar | 2024-08-20T15:08:30Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:296",
"loss:ContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-08-20T15:07:39Z | ---
base_model: intfloat/multilingual-e5-small
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:296
- loss:ContrastiveLoss
widget:
- source_sentence: Biography of Queen Elisabeth II
sentences:
- Biography of Queen Elisabeth I
- What are the ingredients of a pizza?
- When was the Declaration of Independence signed?
- source_sentence: What are ingredients of pizza
sentences:
- How do I reset my password?
- What are the ingredients of a pizza?
- What's the capital of France?
- source_sentence: Japan's population count
sentences:
- What's the population of Japan?
- What is the time now?
- What are the benefits of exercise?
- source_sentence: How many states are in India?
sentences:
- What is the capital of Italy?
- What is the speed of light?
- How many states are in the USA?
- source_sentence: Language spoken by the most people
sentences:
- What is the most spoken language in the world?
- How do I get to the nearest train station?
- Who was the first person to walk on the moon?
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class dev
type: pair-class-dev
metrics:
- type: cosine_accuracy
value: 0.9459459459459459
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8864079713821411
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.951219512195122
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8166882395744324
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9069767441860465
name: Cosine Precision
- type: cosine_recall
value: 1.0
name: Cosine Recall
- type: cosine_ap
value: 0.9896978310027928
name: Cosine Ap
- type: dot_accuracy
value: 0.9459459459459459
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.8864079713821411
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.951219512195122
name: Dot F1
- type: dot_f1_threshold
value: 0.8166882991790771
name: Dot F1 Threshold
- type: dot_precision
value: 0.9069767441860465
name: Dot Precision
- type: dot_recall
value: 1.0
name: Dot Recall
- type: dot_ap
value: 0.9896978310027928
name: Dot Ap
- type: manhattan_accuracy
value: 0.9459459459459459
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 7.30389404296875
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.951219512195122
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.542933464050293
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.9069767441860465
name: Manhattan Precision
- type: manhattan_recall
value: 1.0
name: Manhattan Recall
- type: manhattan_ap
value: 0.9896978310027928
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.9459459459459459
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.4764680862426758
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.951219512195122
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.6043685078620911
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.9069767441860465
name: Euclidean Precision
- type: euclidean_recall
value: 1.0
name: Euclidean Recall
- type: euclidean_ap
value: 0.9896978310027928
name: Euclidean Ap
- type: max_accuracy
value: 0.9459459459459459
name: Max Accuracy
- type: max_accuracy_threshold
value: 7.30389404296875
name: Max Accuracy Threshold
- type: max_f1
value: 0.951219512195122
name: Max F1
- type: max_f1_threshold
value: 9.542933464050293
name: Max F1 Threshold
- type: max_precision
value: 0.9069767441860465
name: Max Precision
- type: max_recall
value: 1.0
name: Max Recall
- type: max_ap
value: 0.9896978310027928
name: Max Ap
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class test
type: pair-class-test
metrics:
- type: cosine_accuracy
value: 0.9459459459459459
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8864079713821411
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.951219512195122
name: Cosine F1
- type: cosine_f1_threshold
value: 0.8166882395744324
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.9069767441860465
name: Cosine Precision
- type: cosine_recall
value: 1.0
name: Cosine Recall
- type: cosine_ap
value: 0.9896978310027928
name: Cosine Ap
- type: dot_accuracy
value: 0.9459459459459459
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.8864079713821411
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.951219512195122
name: Dot F1
- type: dot_f1_threshold
value: 0.8166882991790771
name: Dot F1 Threshold
- type: dot_precision
value: 0.9069767441860465
name: Dot Precision
- type: dot_recall
value: 1.0
name: Dot Recall
- type: dot_ap
value: 0.9896978310027928
name: Dot Ap
- type: manhattan_accuracy
value: 0.9459459459459459
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 7.30389404296875
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.951219512195122
name: Manhattan F1
- type: manhattan_f1_threshold
value: 9.542933464050293
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.9069767441860465
name: Manhattan Precision
- type: manhattan_recall
value: 1.0
name: Manhattan Recall
- type: manhattan_ap
value: 0.9896978310027928
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.9459459459459459
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.4764680862426758
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.951219512195122
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.6043685078620911
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.9069767441860465
name: Euclidean Precision
- type: euclidean_recall
value: 1.0
name: Euclidean Recall
- type: euclidean_ap
value: 0.9896978310027928
name: Euclidean Ap
- type: max_accuracy
value: 0.9459459459459459
name: Max Accuracy
- type: max_accuracy_threshold
value: 7.30389404296875
name: Max Accuracy Threshold
- type: max_f1
value: 0.951219512195122
name: Max F1
- type: max_f1_threshold
value: 9.542933464050293
name: Max F1 Threshold
- type: max_precision
value: 0.9069767441860465
name: Max Precision
- type: max_recall
value: 1.0
name: Max Recall
- type: max_ap
value: 0.9896978310027928
name: Max Ap
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/multilingual-e5-small-pairclass-contrastive")
# Run inference
sentences = [
'Language spoken by the most people',
'What is the most spoken language in the world?',
'Who was the first person to walk on the moon?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `pair-class-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.9459 |
| cosine_accuracy_threshold | 0.8864 |
| cosine_f1 | 0.9512 |
| cosine_f1_threshold | 0.8167 |
| cosine_precision | 0.907 |
| cosine_recall | 1.0 |
| cosine_ap | 0.9897 |
| dot_accuracy | 0.9459 |
| dot_accuracy_threshold | 0.8864 |
| dot_f1 | 0.9512 |
| dot_f1_threshold | 0.8167 |
| dot_precision | 0.907 |
| dot_recall | 1.0 |
| dot_ap | 0.9897 |
| manhattan_accuracy | 0.9459 |
| manhattan_accuracy_threshold | 7.3039 |
| manhattan_f1 | 0.9512 |
| manhattan_f1_threshold | 9.5429 |
| manhattan_precision | 0.907 |
| manhattan_recall | 1.0 |
| manhattan_ap | 0.9897 |
| euclidean_accuracy | 0.9459 |
| euclidean_accuracy_threshold | 0.4765 |
| euclidean_f1 | 0.9512 |
| euclidean_f1_threshold | 0.6044 |
| euclidean_precision | 0.907 |
| euclidean_recall | 1.0 |
| euclidean_ap | 0.9897 |
| max_accuracy | 0.9459 |
| max_accuracy_threshold | 7.3039 |
| max_f1 | 0.9512 |
| max_f1_threshold | 9.5429 |
| max_precision | 0.907 |
| max_recall | 1.0 |
| **max_ap** | **0.9897** |
#### Binary Classification
* Dataset: `pair-class-test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.9459 |
| cosine_accuracy_threshold | 0.8864 |
| cosine_f1 | 0.9512 |
| cosine_f1_threshold | 0.8167 |
| cosine_precision | 0.907 |
| cosine_recall | 1.0 |
| cosine_ap | 0.9897 |
| dot_accuracy | 0.9459 |
| dot_accuracy_threshold | 0.8864 |
| dot_f1 | 0.9512 |
| dot_f1_threshold | 0.8167 |
| dot_precision | 0.907 |
| dot_recall | 1.0 |
| dot_ap | 0.9897 |
| manhattan_accuracy | 0.9459 |
| manhattan_accuracy_threshold | 7.3039 |
| manhattan_f1 | 0.9512 |
| manhattan_f1_threshold | 9.5429 |
| manhattan_precision | 0.907 |
| manhattan_recall | 1.0 |
| manhattan_ap | 0.9897 |
| euclidean_accuracy | 0.9459 |
| euclidean_accuracy_threshold | 0.4765 |
| euclidean_f1 | 0.9512 |
| euclidean_f1_threshold | 0.6044 |
| euclidean_precision | 0.907 |
| euclidean_recall | 1.0 |
| euclidean_ap | 0.9897 |
| max_accuracy | 0.9459 |
| max_accuracy_threshold | 7.3039 |
| max_f1 | 0.9512 |
| max_f1_threshold | 9.5429 |
| max_precision | 0.907 |
| max_recall | 1.0 |
| **max_ap** | **0.9897** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 296 training samples
* Columns: <code>label</code>, <code>sentence2</code>, and <code>sentence1</code>
* Approximate statistics based on the first 1000 samples:
| | label | sentence2 | sentence1 |
|:--------|:------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>0: ~50.68%</li><li>1: ~49.32%</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.39 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.24 tokens</li><li>max: 20 tokens</li></ul> |
* Samples:
| label | sentence2 | sentence1 |
|:---------------|:-------------------------------------------------|:------------------------------------------|
| <code>0</code> | <code>How to improve running speed?</code> | <code>How to train for a marathon?</code> |
| <code>0</code> | <code>What is the distance of a marathon?</code> | <code>How to train for a marathon?</code> |
| <code>1</code> | <code>Mona Lisa painter</code> | <code>Who painted the Mona Lisa?</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
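The parameters above map directly onto the `sentence-transformers` loss API. The following is a minimal sketch of how the loss could be instantiated, not the original training script:

```python
from sentence_transformers import SentenceTransformer, losses

# Instantiate the contrastive loss with the parameters listed above (illustrative sketch).
model = SentenceTransformer("intfloat/multilingual-e5-small")
train_loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=losses.SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)
```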
### Evaluation Dataset
#### Unnamed Dataset
* Size: 74 evaluation samples
* Columns: <code>label</code>, <code>sentence2</code>, and <code>sentence1</code>
* Approximate statistics based on the first 1000 samples:
| | label | sentence2 | sentence1 |
|:--------|:------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>0: ~47.30%</li><li>1: ~52.70%</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.18 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 10.15 tokens</li><li>max: 20 tokens</li></ul> |
* Samples:
| label | sentence2 | sentence1 |
|:---------------|:------------------------------------------------|:------------------------------------------------------------|
| <code>1</code> | <code>Bitcoin's current value</code> | <code>What is the price of Bitcoin?</code> |
| <code>1</code> | <code>Who found out about gravity?</code> | <code>Who discovered gravity?</code> |
| <code>1</code> | <code>Language spoken by the most people</code> | <code>What is the most spoken language in the world?</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 2
- `learning_rate`: 3e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 5
- `lr_scheduler_type`: reduce_lr_on_plateau
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: reduce_lr_on_plateau
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap |
|:----------:|:------:|:-------------:|:----------:|:---------------------:|:----------------------:|
| 0 | 0 | - | - | 0.6933 | - |
| 0.9474 | 9 | - | 0.0182 | 0.9142 | - |
| 1.0526 | 10 | 0.0311 | - | - | - |
| 2.0 | 19 | - | 0.0091 | 0.9730 | - |
| 2.1053 | 20 | 0.0119 | - | - | - |
| 2.9474 | 28 | - | 0.0090 | 0.9878 | - |
| 3.1579 | 30 | 0.0074 | - | - | - |
| 4.0 | 38 | - | 0.0084 | 0.9891 | - |
| 4.2105 | 40 | 0.005 | - | - | - |
| **4.7368** | **45** | **-** | **0.0084** | **0.9897** | **0.9897** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BogdanTurbal/model_bert_large_cased_d_hate_bias_ep_2_sqn_a_p_100_v_16 | BogdanTurbal | 2024-08-20T15:06:44Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-08-20T15:05:56Z | ---
license: apache-2.0
base_model: google-bert/bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_bert_large_cased_d_hate_bias_ep_2_sqn_a_p_100_v_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_bert_large_cased_d_hate_bias_ep_2_sqn_a_p_100_v_16
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3652
- Accuracy: 0.8553
- F1 Micro: 0.8553
- Auc: 0.9293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.3935 | 1.0 | 380 | 0.3683 | 0.8446 | 0.8446 | 0.9193 |
| 0.3148 | 2.0 | 760 | 0.3652 | 0.8553 | 0.8553 | 0.9293 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
youngyoon911/sd-class-butterflies-32 | youngyoon911 | 2024-08-20T15:04:22Z | 46 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-08-20T15:02:17Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class ๐งจ](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('youngyoon911/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Liu-Xiang/gemma7b-ft-lora-sql-v2 | Liu-Xiang | 2024-08-20T15:02:17Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T14:59:07Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GPT4All-Community/gemma-2-9b-it-GGUF | GPT4All-Community | 2024-08-20T14:49:53Z | 644 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"GGUF",
"GPT4All-community",
"GPT4All",
"conversational",
"text-generation",
"en",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"region:us"
] | text-generation | 2024-08-04T15:25:23Z | ---
base_model: google/gemma-2-9b-it
pipeline_tag: text-generation
inference: false
model_creator: Google
model_name: Gemma-2-9b-it
model_type: gemma2
license_link: https://ai.google.dev/gemma/terms
language:
- en
library_name: transformers
license: gemma
quantized_by: ThiloteE
tags:
- text-generation-inference
- transformers
- GGUF
- GPT4All-community
- GPT4All
- conversational
---
> [!NOTE]
> This is a model that is assumed to perform well, but it may require more testing and user feedback. Be aware that only models featured within the GUI of GPT4All are curated and officially supported by Nomic. Use at your own risk.
# About
<!-- ### quantize_version: 3 -->
<!-- ### convert_type: hf -->
- Static quants of https://huggingface.co/google/gemma-2-9b-it at commit [1937c70](https://huggingface.co/google/gemma-2-9b-it/commit/1937c70277fcc5f7fb0fc772fc5bc69378996e71)
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [e09a800](https://github.com/ggerganov/llama.cpp/commit/e09a800f9a9b19c73aa78e03b4c4be8ed988f3e6)
These quants were created with a customized configuration that has been proven not to cause visible end-of-string (eos) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json and tokenizer_config.json differ from the original configuration files found in the original model's repository at the time these quants were created.
# Prompt Template (for GPT4All)
This model does not have a system prompt by default.
Chat Template:
```
<start_of_turn>user
%1<end_of_turn>
<start_of_turn>model
%2<end_of_turn>
```
# Context Length
`8192`
Use a lower value during inference if you do not have enough RAM or VRAM.
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF/resolve/main/gemma-2-9b-it-Q4_0.gguf) | Q4_0 | 5.44 | fast, recommended |
| [GGUF](https://huggingface.co/GPT4All-Community/gemma-2-9b-it-GGUF/resolve/main/gemma-2-9b-it-f16.gguf) | f16 | 18.5 | 16 bpw, overkill |
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) for
more details, including on how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for the inspiration for this model card and for their contributions to open source, and 3Simplex for lots of help along the way.
Shoutout to the GPT4All and llama.cpp communities :-)
<br>
<br>
<br>
<br>
------
<!-- footer end -->
<!-- original-model-card start -->
# Original Model card:
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-9b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
device_map="auto")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-9b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-9b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
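If you do build the prompt by hand, a minimal sketch that simply mirrors the template output shown above (the `<bos>` token is included in the string, so encode it with `add_special_tokens=False` as in the generation snippet below) could look like this:
```python
# Minimal sketch: building the prompt manually, mirroring the template output above.
user_message = "Write a hello world program"
prompt = (
    "<bos><start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)
```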
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
[foundation models][foundation-models], including large language models like
these.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
<!-- original-model-card end -->
<!-- end -->
|
phpthinh/pho5e-5 | phpthinh | 2024-08-20T14:48:57Z | 193 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2024-08-20T14:44:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
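Until the author fills this section in, here is a minimal, hedged sketch inferred from the repository tags (`vit`, image-feature-extraction); the processor choice and image path are assumptions, not part of the original card:
```python
# Hypothetical sketch based on the repo tags (vit, image-feature-extraction).
from transformers import AutoImageProcessor, AutoModel
from PIL import Image

processor = AutoImageProcessor.from_pretrained("phpthinh/pho5e-5")
model = AutoModel.from_pretrained("phpthinh/pho5e-5")

image = Image.open("example.jpg")  # any local image (assumption)
inputs = processor(images=image, return_tensors="pt")
features = model(**inputs).last_hidden_state  # patch-level features
print(features.shape)
```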
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phpthinh/pho2e-5 | phpthinh | 2024-08-20T14:43:23Z | 188 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2024-08-20T14:37:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF | mradermacher | 2024-08-20T14:37:31Z | 267 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jsgreenawalt/gemma-2-9B-it-advanced-v2.1",
"base_model:quantized:jsgreenawalt/gemma-2-9B-it-advanced-v2.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-20T13:09:32Z | ---
base_model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
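As an illustration that is not part of the original card, a single-file quant can also be run from Python with `llama-cpp-python`; the file name, context length, and prompt below are assumptions — substitute whichever quant from the table below you downloaded:
```python
# Hypothetical sketch: running one of the GGUF quants with llama-cpp-python.
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-9B-it-advanced-v2.1.i1-Q4_K_M.gguf",  # any quant from the table below
    n_ctx=8192,  # assumed context length
)
out = llm("Write me a poem about Machine Learning.", max_tokens=128)
print(out["choices"][0]["text"])
```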
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
phpthinh/pho1e-5 | phpthinh | 2024-08-20T14:36:11Z | 188 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2024-08-20T14:27:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/gemma-2-9B-it-advanced-v2.1-GGUF | QuantFactory | 2024-08-20T14:36:10Z | 198 | 10 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:merge:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:google/gemma-2-9b-it",
"base_model:merge:google/gemma-2-9b-it",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:merge:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:wzhouad/gemma-2-9b-it-WPO-HB",
"base_model:merge:wzhouad/gemma-2-9b-it-WPO-HB",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-20T12:52:59Z |
---
base_model:
- wzhouad/gemma-2-9b-it-WPO-HB
- UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- google/gemma-2-9b-it
- princeton-nlp/gemma-2-9b-it-SimPO
library_name: transformers
tags:
- mergekit
- merge
---

# QuantFactory/gemma-2-9B-it-advanced-v2.1-GGUF
This is quantized version of [jsgreenawalt/gemma-2-9B-it-advanced-v2.1](https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1) created using llama.cpp
# Original Model Card
# Gemma Advanced V2.1
This is a merge of the 'smartest' advanced fine-tunes available for Gemma-2-9b-it. It includes WPO, SimPO, and SPPO. The merge was performed via the SOTA 'della' merge method. Merge parameters have been hand-tuned for best results. The Q8_0 quant is highly recommended until better quants come along.
## Notes and observations:
* The extreme temperature sensitivity from V1 has been fixed; the model no longer needs to be run at lower temperatures
* Has a somewhat different writing style than any of the parent models
* Great instruction following
* Tracks plot details well and has good situational understanding
* Seems to have a good understanding of psychology, emotions and creative writing
* More 'sane' than base gemma-it, SPPO, or SimPO - not as prone as SPPO or SimPO to 'Cruella De Vil' or 'Evil Sorceress' characterizations when portraying characters
* Would likely serve as a good base for further merges
* I'm looking for a job, if you're hiring. I'm a skilled Python developer who brings strong devops skills along with an ever-growing knowledge of machine learning pipelines and models. Message me if you want to talk about what I can bring to your team.
* Overall, this feels like a very useful and successful merge.
Quantized GGUFs can be found here: https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1-GGUF
I'll link to other quants as they appear.
# sample ollama Modelfile
```yaml
FROM /path/to/file/gemma-2-9B-it-advanced-v2.1-Q8_0.gguf
PARAMETER stop "<start_of_turn>"
PARAMETER stop "<end_of_turn>"
PARAMETER num_ctx 8192
TEMPLATE """<start_of_turn>user
{{ if .System }}{{ .System }} {{ end }}{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>"""
```
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della merge method using [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) as a base.
### Models Merged
The following models were included in the merge:
* [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB)
* [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
* [UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: google/gemma-2-9b-it
- model: wzhouad/gemma-2-9b-it-WPO-HB
parameters:
density: 0.55
weight: 0.6
- model: princeton-nlp/gemma-2-9b-it-SimPO
parameters:
density: 0.35
weight: 0.6
- model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
parameters:
density: 0.25
weight: 0.4
merge_method: della
base_model: google/gemma-2-9b-it
parameters:
normalize: true
int8_mask: true
lambda: 1.0
epsilon: 0.1
dtype: float16
```
|
DW-ReCo/spot_llama3_4bit | DW-ReCo | 2024-08-20T14:33:02Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-08-19T14:37:38Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** DW-ReCo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
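The card does not include a loading snippet; a minimal, hedged sketch assuming the standard transformers + bitsandbytes stack for a 4-bit checkpoint might look like this:
```python
# Hedged sketch (not from the original card): loading the 4-bit checkpoint.
# pip install transformers accelerate bitsandbytes
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DW-ReCo/spot_llama3_4bit")
model = AutoModelForCausalLM.from_pretrained(
    "DW-ReCo/spot_llama3_4bit",
    device_map="auto",  # assumption: place weights on the available GPU(s)
)
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```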
|
Loulou57/scarlett_johansson_flux | Loulou57 | 2024-08-20T14:31:54Z | 9 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-20T12:44:21Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: SC_JO
---
# Scarlett_Johansson_Flux
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SC_JO` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Loulou57/scarlett_johansson_flux', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
PrunaAI/ContactDoctor-Bio-Medical-Llama-3-8B-bnb-8bit-smashed | PrunaAI | 2024-08-20T14:20:33Z | 6 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:ContactDoctor/Bio-Medical-Llama-3-8B",
"base_model:quantized:ContactDoctor/Bio-Medical-Llama-3-8B",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2024-08-20T14:16:11Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ContactDoctor/Bio-Medical-Llama-3-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo ContactDoctor/Bio-Medical-Llama-3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/ContactDoctor-Bio-Medical-Llama-3-8B-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ContactDoctor/Bio-Medical-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, ContactDoctor/Bio-Medical-Llama-3-8B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
ans-imran/bert-finetuned-ner | ans-imran | 2024-08-20T14:19:55Z | 7 | 0 | null | [
"safetensors",
"bert",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-20T13:41:16Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9373134328358209
- name: Recall
type: recall
value: 0.9511948838774823
- name: F1
type: f1
value: 0.9442031406615435
- name: Accuracy
type: accuracy
value: 0.9865779713898863
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Precision: 0.9373
- Recall: 0.9512
- F1: 0.9442
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
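The card does not yet include a usage snippet; a minimal, hedged sketch using the `token-classification` pipeline (the aggregation strategy and example sentence are assumptions) could look like this:
```python
# Hedged usage sketch for the fine-tuned NER checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ans-imran/bert-finetuned-ner",
    aggregation_strategy="simple",  # assumption: merge sub-word tokens into entity spans
)
print(ner("Hugging Face is based in New York City."))
```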
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0771 | 1.0 | 1756 | 0.0644 | 0.9180 | 0.9382 | 0.9280 | 0.9831 |
| 0.0369 | 2.0 | 3512 | 0.0599 | 0.9404 | 0.9514 | 0.9459 | 0.9868 |
| 0.0227 | 3.0 | 5268 | 0.0617 | 0.9373 | 0.9512 | 0.9442 | 0.9866 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlexWortega/gemma_60k | AlexWortega | 2024-08-20T14:18:58Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T14:07:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
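Until this section is filled in, a minimal, hedged sketch inferred from the repository tags (`gemma2`, text-generation) could look like this; the dtype and device settings are assumptions:
```python
# Hypothetical sketch based on the repo tags (gemma2, text-generation).
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("AlexWortega/gemma_60k")
model = AutoModelForCausalLM.from_pretrained(
    "AlexWortega/gemma_60k",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # assumption
)
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```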
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
briibaajun/whisper-small-sw | briibaajun | 2024-08-20T14:14:47Z | 6 | 0 | null | [
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"sw",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-19T15:34:03Z | ---
language:
- sw
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Sw - Sabrina Msuya
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.426296206141114
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sw - Sabrina Msuya
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1728
- Wer Ortho: 64.2802
- Wer: 13.4263
## Model description
More information needed
## Intended uses & limitations
More information needed
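A minimal inference sketch (not part of the original card; the audio file name and the 16 kHz resampling step are assumptions) for transcribing Swahili audio with this checkpoint:
```python
# Hypothetical usage example for briibaajun/whisper-small-sw.
import librosa
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_id = "briibaajun/whisper-small-sw"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

speech, _ = librosa.load("clip.wav", sr=16_000)  # Whisper expects 16 kHz audio
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```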
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|
| 0.1221 | 1.6287 | 500 | 0.1728 | 64.2802 | 13.4263 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
ianccwip/flux-dev-flwrs-v2-lora | ianccwip | 2024-08-20T14:08:19Z | 450 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-08-20T11:15:34Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: in the style of flwrs
---
# Flux Dev Flwrs V2 Lora
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `in the style of flwrs` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ianccwip/flux-dev-flwrs-v2-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
asr-africa/wav2vec2-xlsr-fula-google-fleurs-5-hours-plus-lm | asr-africa | 2024-08-20T14:00:51Z | 6 | 1 | null | [
"safetensors",
"wav2vec2",
"generated_from_trainer",
"dataset:fleurs",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-08-20T13:57:57Z | ---
base_model: facebook/wav2vec2-xls-r-300m
datasets:
- fleurs
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xlsr-fula-google-fleurs-5-hours
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: fleurs
type: fleurs
config: ff_sn
split: None
args: ff_sn
metrics:
- type: wer
value: 0.646049896049896
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-fula-google-fleurs-5-hours
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1949
- Wer: 0.6460
- Cer: 0.2359
## Model description
More information needed
## Intended uses & limitations
More information needed
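An illustrative transcription sketch (not supplied by the authors; the audio file name is a placeholder, and the presence of a bundled n-gram LM decoder is an assumption based on the "-plus-lm" suffix):
```python
# Hypothetical usage via the transformers ASR pipeline, which also picks up an
# n-gram LM decoder if one is shipped in the repository.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="asr-africa/wav2vec2-xlsr-fula-google-fleurs-5-hours-plus-lm",
)
print(asr("fula_clip.wav")["text"])
```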
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 7.1138 | 10.96 | 200 | 2.9561 | 1.0 | 1.0 |
| 2.8708 | 21.92 | 400 | 2.0221 | 1.0 | 0.6369 |
| 1.0031 | 32.88 | 600 | 0.9750 | 0.6509 | 0.2222 |
| 0.4471 | 43.84 | 800 | 1.1949 | 0.6460 | 0.2359 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
|
GautamR/akai_flow_classifier_kmai_dev_test_bot | GautamR | 2024-08-20T13:57:53Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-08-20T13:53:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
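While the card itself is incomplete, a hypothetical starter based only on the repository metadata (a BERT text-classification model); the label names are not documented:
```python
# Assumed usage: standard text-classification pipeline over this checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="GautamR/akai_flow_classifier_kmai_dev_test_bot",
)
print(classifier("sample user query"))
```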
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GaetanMichelet/Llama-31-8B_task-1_120-samples_config-2_full | GaetanMichelet | 2024-08-20T13:48:03Z | 14 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:GaetanMichelet/chat-60_ft_task-1",
"dataset:GaetanMichelet/chat-120_ft_task-1",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-08-20T13:14:56Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- GaetanMichelet/chat-60_ft_task-1
- GaetanMichelet/chat-120_ft_task-1
library_name: peft
license: llama3.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-31-8B_task-1_120-samples_config-2_full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-31-8B_task-1_120-samples_config-2_full
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the GaetanMichelet/chat-60_ft_task-1 and the GaetanMichelet/chat-120_ft_task-1 datasets.
It achieves the following results on the evaluation set:
- Loss: 0.8694
## Model description
More information needed
## Intended uses & limitations
More information needed
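A minimal loading sketch (not from the card), assuming the standard PEFT adapter workflow on top of the base instruct model:
```python
# Hypothetical example: attach this LoRA adapter to the base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "GaetanMichelet/Llama-31-8B_task-1_120-samples_config-2_full"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```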
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.448 | 0.9091 | 5 | 2.3732 |
| 2.2361 | 2.0 | 11 | 2.0279 |
| 1.8253 | 2.9091 | 16 | 1.7251 |
| 1.3791 | 4.0 | 22 | 1.2330 |
| 1.0878 | 4.9091 | 27 | 1.0337 |
| 0.9771 | 6.0 | 33 | 0.9739 |
| 0.8967 | 6.9091 | 38 | 0.9426 |
| 0.8815 | 8.0 | 44 | 0.9129 |
| 0.816 | 8.9091 | 49 | 0.8952 |
| 0.748 | 10.0 | 55 | 0.8762 |
| 0.6939 | 10.9091 | 60 | 0.8727 |
| 0.6449 | 12.0 | 66 | 0.8694 |
| 0.5874 | 12.9091 | 71 | 0.8921 |
| 0.4934 | 14.0 | 77 | 0.9429 |
| 0.4382 | 14.9091 | 82 | 1.0083 |
| 0.347 | 16.0 | 88 | 1.0592 |
| 0.2565 | 16.9091 | 93 | 1.1458 |
| 0.1926 | 18.0 | 99 | 1.2523 |
| 0.1477 | 18.9091 | 104 | 1.4710 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
AchrafGhribi31/llama3-esg-8b-merged-4bit_V0 | AchrafGhribi31 | 2024-08-20T13:37:46Z | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-08-20T13:36:28Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** AchrafGhribi31
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
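A hedged loading sketch (not provided by the author); since the repository is tagged as a merged 4-bit bitsandbytes checkpoint, a plain `transformers` load with `bitsandbytes` installed should work, and the prompt below is only a placeholder:
```python
# Assumed usage for the merged 4-bit checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AchrafGhribi31/llama3-esg-8b-merged-4bit_V0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What does ESG stand for?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```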
|
saad7489/segformer-b0-finetuned-segments-SixrayKnife8-20-2024 | saad7489 | 2024-08-20T13:31:20Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"region:us"
] | image-segmentation | 2024-08-20T13:02:13Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-SixrayKnife8-20-2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-SixrayKnife8-20-2024
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the saad7489/SixraygunTest dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2632
- Mean Iou: 0.7518
- Mean Accuracy: 0.8442
- Overall Accuracy: 0.9846
- Accuracy Bkg: 0.9934
- Accuracy Knife: 0.6638
- Accuracy Gun: 0.8755
- Iou Bkg: 0.9864
- Iou Knife: 0.5722
- Iou Gun: 0.6969
## Model description
More information needed
## Intended uses & limitations
More information needed
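An illustrative segmentation sketch (not part of the card; the image path is a placeholder and the class names are taken from the metrics above):
```python
# Hypothetical inference example for the fine-tuned SegFormer checkpoint.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForSemanticSegmentation

model_id = "saad7489/segformer-b0-finetuned-segments-SixrayKnife8-20-2024"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("xray_scan.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]           # per-pixel class ids (bkg / knife / gun)
```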
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bkg | Accuracy Knife | Accuracy Gun | Iou Bkg | Iou Knife | Iou Gun |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:--------------:|:------------:|:-------:|:---------:|:-------:|
| 0.7462 | 5.0 | 20 | 0.8680 | 0.5725 | 0.7955 | 0.9552 | 0.9653 | 0.6150 | 0.8064 | 0.9557 | 0.3394 | 0.4223 |
| 0.5675 | 10.0 | 40 | 0.5259 | 0.5797 | 0.6730 | 0.9685 | 0.9873 | 0.3829 | 0.6486 | 0.9690 | 0.3247 | 0.4455 |
| 0.5079 | 15.0 | 60 | 0.4394 | 0.6394 | 0.7578 | 0.9723 | 0.9859 | 0.5491 | 0.7385 | 0.9731 | 0.4658 | 0.4794 |
| 0.3976 | 20.0 | 80 | 0.3820 | 0.6781 | 0.7446 | 0.9792 | 0.9942 | 0.5443 | 0.6952 | 0.9802 | 0.4938 | 0.5601 |
| 0.3527 | 25.0 | 100 | 0.3454 | 0.7173 | 0.8050 | 0.9816 | 0.9928 | 0.6128 | 0.8094 | 0.9829 | 0.5373 | 0.6318 |
| 0.3571 | 30.0 | 120 | 0.3192 | 0.7336 | 0.8386 | 0.9826 | 0.9917 | 0.6508 | 0.8734 | 0.9843 | 0.5518 | 0.6646 |
| 0.3201 | 35.0 | 140 | 0.2858 | 0.7399 | 0.8390 | 0.9834 | 0.9924 | 0.6540 | 0.8706 | 0.9851 | 0.5637 | 0.6709 |
| 0.3205 | 40.0 | 160 | 0.2774 | 0.7482 | 0.8301 | 0.9846 | 0.9944 | 0.6447 | 0.8512 | 0.9864 | 0.5673 | 0.6911 |
| 0.2899 | 45.0 | 180 | 0.2677 | 0.7497 | 0.8399 | 0.9845 | 0.9937 | 0.6581 | 0.8679 | 0.9864 | 0.5679 | 0.6948 |
| 0.2672 | 50.0 | 200 | 0.2632 | 0.7518 | 0.8442 | 0.9846 | 0.9934 | 0.6638 | 0.8755 | 0.9864 | 0.5722 | 0.6969 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
RhapsodyAI/MiniCPM-V-Embedding-preview | RhapsodyAI | 2024-08-20T13:31:06Z | 154 | 46 | transformers | [
"transformers",
"pytorch",
"minicpmv",
"feature-extraction",
"information retrieval",
"embedding model",
"visual information retrieval",
"custom_code",
"en",
"arxiv:2408.01800",
"license:apache-2.0",
"region:us"
] | feature-extraction | 2024-06-27T12:23:01Z | ---
language:
- en
tags:
- information retrieval
- embedding model
- visual information retrieval
metrics:
- recall
pipeline_tag: feature-extraction
license: apache-2.0
---
# MiniCPM-Visual-Embedding: OCR-free Visual Document Embedding Model as Your Personal Librarian
The model only takes images as document-side inputs and produces vectors representing document pages. Memex is trained with over 200k query-visual document pairs, including textual documents, visual documents, arxiv figures, plots, charts, industry documents, textbooks, ebooks, and openly-available PDFs. Its performance is on a par with our ablation text embedding model on text-oriented documents, and it has an advantage on visually-intensive documents.
Our model is capable of:
- Help you read a long visually-intensive or text-oriented PDF document and find the pages that answer your question.
- Help you build a personal library and retrieve book pages from a large collection of books.
- It has only 2.8B parameters, and has the potential to run on your PC.
- It works like a human: it reads and comprehends with **vision** and remembers **multimodal** information in its hippocampus.

# News
- 2024-08-18: We released a new [end-to-end Visual RAG huggingface demo](https://huggingface.co/spaces/bokesyo/MiniCPMV-RAG-PDFQA), which supports **both retrieval and generation**, which means you can now use our system to **answer your questions within a long PDF**! This demo is also locally deployable: clone the code in the space and run it on your own device.
- 2024-08-17: We open-sourced a [cleaned version of the training codebase](https://github.com/RhapsodyAILab/MiniCPM-V-Embedding-v0-Train) for MiniCPM-Visual-Embedding, which supports **deepspeed zero stage 1,2** and **large batch sizes** like `4096` for full-parameter training to turn VLMs into dense retrievers. We also developed methods to filter training datasets and generate queries from unlabelled datasets. We support **multi-node, multi-GPU** high-efficiency **evaluation** on large retrieval datasets. With these efforts, we support contrastive learning for VLMs of up to `20B` parameters with a `4096` batch size. We have tested that one can train a VLM dense retriever with only **1 GPU, but with a batch size of `4096`**.
- 2024-07-14: We released an **online huggingface demo**! Try our [online demo](https://huggingface.co/spaces/bokesyo/MiniCPM_Visual_Document_Retriever_Demo)! This demo is also locally deployable: clone the code in the space and run it on your own device.
- 2024-07-13: We released a **locally deployable command-line based demo** for users to retrieve the most relevant pages from a given PDF file (which can be very long); take a look at [pipeline.py](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/blob/main/pipeline.py).
- 2024-06-27: We released our first visual embedding model checkpoint on [huggingface](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0).
- 2024-05-08: We [open-sourced](https://github.com/RhapsodyAILab/minicpm-visual-embedding-v0) our training code (full-parameter tuning with GradCache and DeepSpeed zero-stage2, supporting large batch sizes across multiple GPUs with zero-stage1) and eval code.
# Deploy on your PC
**Please make sure you have at least 32GB memory on your PC.**
- Apple M1/M2/M3 with 32GB memory.
- x86 CPU with 32GB memory.
- x86 CPU with 32GB memory + Nvidia GPU with 16GB memory.
### Install dependencies
Use pip to install all dependencies:
```
Pillow==10.1.0
timm==0.9.10
torch==2.1.2
torchvision==0.16.2
transformers==4.36.0
sentencepiece==0.1.99
numpy==1.26.0
```
### Download model weights and modeling file
Use one of the following methods:
- Download with git clone.
```bash
git lfs install
git clone https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0
```
- Download with huggingface-hub.
```bash
pip install huggingface-hub
huggingface-cli download --resume-download RhapsodyAI/minicpm-visual-embedding-v0 --local-dir minicpm-visual-embedding-v0 --local-dir-use-symlinks False
```
### Launch demo
Install `gradio` first.
```bash
pip install gradio
```
Clone demo source code.
- For retrieval-only demo (without generation), you should clone https://huggingface.co/spaces/bokesyo/MiniCPM_Visual_Document_Retriever_Demo.
- For retrieval and generation (full RAG pipeline), you should clone https://huggingface.co/spaces/bokesyo/MiniCPMV-RAG-PDFQA.
```bash
git clone https://huggingface.co/spaces/bokesyo/MiniCPM_Visual_Document_Retriever_Demo
git clone https://huggingface.co/spaces/bokesyo/MiniCPMV-RAG-PDFQA
```
For the `retrieval and generation` demo, you also need to install `flash_attn`.
Adapt the code in `app.py` according to your device.
- For M1/M2/M3 users, please make sure `model = model.to(device='mps', dtype=torch.float16)` then run `PYTORCH_ENABLE_MPS_FALLBACK=1 python app.py`.
- For x86 CPU users, please remove `model = model.to(device)` then run `python app.py`.
- For x86 CPU + Nvidia GPU users, please make sure `model = model.to('cuda')` then run `python app.py`.
- If you encounter an error, please open an issue [here](https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0/discussions); we will respond soon.
# For research purposes
To run the model for research purposes, please refer to the following code:
```python
from transformers import AutoModel
from transformers import AutoTokenizer
from PIL import Image
import torch
device = 'cuda:0'
# Load model, be sure to substitute `model_path` by your model path
model_path = '/local/path/to/model'
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
model.to(device)
# Load image to PIL.Image object
image_1 = Image.open('/local/path/to/images/memex.png').convert('RGB')
image_2 = Image.open('/local/path/to/images/us2020.png').convert('RGB')
image_3 = Image.open('/local/path/to/images/hard_negative.png').convert('RGB')
# User query
query_instruction = 'Represent this query for retrieving relavant document: '
query = 'Who was elected as president of United States in 2020?'
query_full = query_instruction + query
# Embed image documents
with torch.no_grad():
p_reps = model(text=['', '', ''], image=[image_1, image_2, image_3], tokenizer=tokenizer).reps
# Embed text queries
with torch.no_grad():
q_reps = model(text=[query_full], image=[None], tokenizer=tokenizer).reps # [B, s, d]
# Calculate similarities
scores = torch.matmul(q_reps, p_reps.T)
print(scores)
# tensor([[-0.0112, 0.3316, 0.2376]], device='cuda:0')
```
# Todos
- [x] Release huggingface space demo.
- [ ] Release the evaluation results.
- [ ] Release technical report.
# Limitations
- This checkpoint is an alpha version and may not be strong on your tasks; for bad cases, please create an issue to let us know, many thanks!
- The modeling script `modeling_minicpmv` on `huggingface` is not standard yet; the inference code could be further improved.
- The inference speed is low because the vision encoder uses `timm`, which does not yet support `flash-attn`.
- The model does not perform well on Chinese and other non-English information retrieval tasks.
# Citation
If you find our work useful, please consider cite us:
```bibtex
@misc{RhapsodyEmbedding2024,
author = {Rhapsody Group, OpenBMB},
title = {Memex: OCR-free Visual Document Embedding Model as Your Personal Librarian},
year = {2024},
howpublished = {\url{https://huggingface.co/RhapsodyAI/minicpm-visual-embedding-v0}},
note = {Accessed: 2024-06-28}
}
```
Thanks to MiniCPM-V-2.0 `arxiv.org/abs/2408.01800`, without which there would be no `minicpm-visual-embedding`.
# ไธญๆๆๆกฃ
https://www.53ai.com/news/RAG/2024082047058.html
https://blog.csdn.net/bokesyo/article/details/141335670
https://developer.aliyun.com/article/1590698?spm=a2c6h.13148508.setting.14.b4e94f0eIQp59B
https://cloud.tencent.com/developer/article/2446218
|
mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF | mradermacher | 2024-08-20T13:15:52Z | 59 | 5 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jsgreenawalt/gemma-2-9B-it-advanced-v2.1",
"base_model:quantized:jsgreenawalt/gemma-2-9B-it-advanced-v2.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-20T12:40:45Z | ---
base_model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jsgreenawalt/gemma-2-9B-it-advanced-v2.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
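For a quick local test, one possible invocation (an assumption, not from the original card; it requires a llama.cpp build that provides the `llama-cli` binary and uses a file name taken from the quant table below):
```bash
# Download one quant and run it with llama.cpp (illustrative only).
huggingface-cli download mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF \
  gemma-2-9B-it-advanced-v2.1.Q4_K_M.gguf --local-dir .
./llama-cli -m gemma-2-9B-it-advanced-v2.1.Q4_K_M.gguf -p "Hello" -n 128
```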
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.IQ3_XS.gguf) | IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.IQ3_S.gguf) | IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.IQ3_M.gguf) | IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2-9B-it-advanced-v2.1-GGUF/resolve/main/gemma-2-9B-it-advanced-v2.1.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
arvnoodle/lora-trained-xl | arvnoodle | 2024-08-20T13:02:06Z | 7 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-08-15T03:58:42Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- lora
instance_prompt: a photo of sks dog
widget:
- text: A photo of sks dog in a bucket
output:
url: image_0.png
- text: A photo of sks dog in a bucket
output:
url: image_1.png
- text: A photo of sks dog in a bucket
output:
url: image_2.png
- text: A photo of sks dog in a bucket
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - arvnoodle/lora-trained-xl
<Gallery />
## Model description
These are arvnoodle/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](arvnoodle/lora-trained-xl/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
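Until the TODO above is filled in by the author, a minimal sketch assuming the standard diffusers LoRA loading path for DreamBooth SDXL weights (the default weight file name in the repo is assumed):
```python
# Hypothetical example: load base SDXL and apply these DreamBooth LoRA weights.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("arvnoodle/lora-trained-xl")
image = pipeline("A photo of sks dog in a bucket").images[0]
```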
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
saad7489/segformer-b0-finetuned-segments-SixrayKnife8-19-2024 | saad7489 | 2024-08-20T12:56:21Z | 8 | 0 | null | [
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"region:us"
] | image-segmentation | 2024-08-20T06:40:37Z | ---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-SixrayKnife8-19-2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-SixrayKnife8-19-2024
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the saad7489/SixraygunTest dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1817
- Mean Iou: 0.8160
- Mean Accuracy: 0.8823
- Overall Accuracy: 0.9881
- Accuracy Bkg: 0.9954
- Accuracy Gun: 0.7759
- Accuracy Knife: 0.8755
- Iou Bkg: 0.9890
- Iou Gun: 0.7014
- Iou Knife: 0.7574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bkg | Accuracy Gun | Accuracy Knife | Iou Bkg | Iou Gun | Iou Knife |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------:|:------------:|:--------------:|:-------:|:-------:|:---------:|
| 0.4406 | 5.0 | 20 | 0.4093 | 0.7210 | 0.7883 | 0.9804 | 0.9938 | 0.6719 | 0.6991 | 0.9807 | 0.5730 | 0.6092 |
| 0.3699 | 10.0 | 40 | 0.3327 | 0.7327 | 0.7880 | 0.9819 | 0.9954 | 0.6559 | 0.7128 | 0.9824 | 0.5724 | 0.6432 |
| 0.31 | 15.0 | 60 | 0.3035 | 0.7698 | 0.8614 | 0.9842 | 0.9926 | 0.7207 | 0.8709 | 0.9853 | 0.6217 | 0.7023 |
| 0.2852 | 20.0 | 80 | 0.2649 | 0.7817 | 0.8711 | 0.9850 | 0.9928 | 0.7453 | 0.8752 | 0.9860 | 0.6423 | 0.7168 |
| 0.2583 | 25.0 | 100 | 0.2329 | 0.7936 | 0.8693 | 0.9863 | 0.9943 | 0.7497 | 0.8639 | 0.9873 | 0.6628 | 0.7307 |
| 0.2521 | 30.0 | 120 | 0.2194 | 0.7975 | 0.8778 | 0.9867 | 0.9942 | 0.7530 | 0.8862 | 0.9879 | 0.6731 | 0.7316 |
| 0.2357 | 35.0 | 140 | 0.2044 | 0.8042 | 0.8804 | 0.9871 | 0.9944 | 0.7635 | 0.8833 | 0.9881 | 0.6789 | 0.7456 |
| 0.2198 | 40.0 | 160 | 0.1929 | 0.8126 | 0.8789 | 0.9878 | 0.9953 | 0.7685 | 0.8728 | 0.9888 | 0.6937 | 0.7552 |
| 0.1909 | 45.0 | 180 | 0.1837 | 0.8151 | 0.8810 | 0.9880 | 0.9954 | 0.7726 | 0.8750 | 0.9890 | 0.6997 | 0.7568 |
| 0.1908 | 50.0 | 200 | 0.1817 | 0.8160 | 0.8823 | 0.9881 | 0.9954 | 0.7759 | 0.8755 | 0.9890 | 0.7014 | 0.7574 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
BogdanTurbal/model_roberta_large_d_hate_bias_hate_bias_ep_2_7_a_sqn_a_b_p_100_5_v_15 | BogdanTurbal | 2024-08-20T12:55:04Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"region:us"
] | null | 2024-08-20T12:54:22Z | ---
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_roberta_large_d_hate_bias_hate_bias_ep_2_7_a_sqn_a_b_p_100_5_v_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_roberta_large_d_hate_bias_hate_bias_ep_2_7_a_sqn_a_b_p_100_5_v_15
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5201
- Accuracy: 0.7821
- F1 Micro: 0.7821
- Auc: 0.8528
## Model description
More information needed
## Intended uses & limitations
More information needed
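An illustrative scoring sketch (not in the card; the input sentence is a placeholder and the label meanings are not documented in this repository):
```python
# Hypothetical usage: score a sentence and read out class probabilities.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "BogdanTurbal/model_roberta_large_d_hate_bias_hate_bias_ep_2_7_a_sqn_a_b_p_100_5_v_15"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("example sentence to score", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```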
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.7652 | 1.0 | 38 | 0.6989 | 0.5 | 0.5 | 0.6244 |
| 0.6958 | 2.0 | 76 | 0.6845 | 0.5090 | 0.5090 | 0.7243 |
| 0.6112 | 3.0 | 114 | 0.5881 | 0.6974 | 0.6974 | 0.7726 |
| 0.4671 | 4.0 | 152 | 0.5185 | 0.7590 | 0.7590 | 0.8284 |
| 0.4422 | 5.0 | 190 | 0.5444 | 0.7640 | 0.7640 | 0.8375 |
| 0.4041 | 6.0 | 228 | 0.5199 | 0.7796 | 0.7796 | 0.8490 |
| 0.4677 | 7.0 | 266 | 0.5201 | 0.7821 | 0.7821 | 0.8528 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/Qra-13b-instruct-GGUF | mradermacher | 2024-08-20T12:40:54Z | 39 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"dataset:Lbuk/alpaca_data_pl.json",
"base_model:TeeZee/Qra-13b-instruct",
"base_model:quantized:TeeZee/Qra-13b-instruct",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-08-20T06:20:52Z | ---
base_model: TeeZee/Qra-13b-instruct
datasets:
- Lbuk/alpaca_data_pl.json
language:
- en
library_name: transformers
license: llama2
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TeeZee/Qra-13b-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qra-13b-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qra-13b-instruct-GGUF/resolve/main/Qra-13b-instruct.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ramikan-BR/P3-mini-v1 | Ramikan-BR | 2024-08-20T12:40:22Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T11:40:34Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Input:
# Write a code to create the snake game
### Output:
# Here is a simple implementation of the classic Snake game in Python using the Pygame library. This code will create a window with a snake that moves in a grid, eating apples and growing in length.
```python
import pygame
import random
import time
# Initialize Pygame
pygame.init()
# Set the dimensions of the game window
width, height = 600, 600
screen = pygame.display.set_mode((width, height))
pygame.display.set_caption('Snake Game')
# Define colors
white = (255, 255, 255)
black = (0, 0, 0)
red = (255, 0, 0)
green = (0, 255, 0)
# Define the snake
snake_pos = [[100, 50], [90, 50], [80, 50]]
snake_dir = [0, -1]
# Define the apple
apple_pos = [random.randint(50, width-50), random.randint(50, height-50)]
# Define the score
score = 0
# Define the clock
clock = pygame.time.Clock()
# Game loop
running = True
while running:
# Handle events
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
# Move the snake
new_head = [snake_pos[0][0] + snake_dir[0], snake_pos[0][1] + snake_dir[1]]
# Check if the snake eats the apple
if new_head[0] == apple_pos[0] and new_head[1] == apple_pos[1]:
score += 1
apple_pos = [random.randint(50, width-50), random.randint(50, height-50)]
snake_pos.insert(0, new_head)
if len(snake_pos) > 10:
snake_pos.pop()
else:
snake_pos.insert(0, new_head)
snake_pos.pop()
# Check if the snake hits the wall
if new_head[0] < 0 or new_head[0] > width-100 or new_head[1] < 0 or new_head[1] > height-100:
running = False
# Draw the game
screen.fill(white)
for pos in snake_pos:
pygame.draw.rect(screen, black, pygame.Rect(pos[0], pos[1], 10, 10))
pygame.draw.rect(screen, red, pygame.Rect(apple_pos[0], apple_pos[1], 10, 10))
pygame.display.flip()
# Set the frame rate
clock.tick(10)
# Print the score
print(f'Score: {score}')
pygame.quit()
```
This code will create a window with a snake that moves in a grid, eating apples and growing in length. The score will be printed to the console.
Note that this is a very basic implementation and can be improved in many ways, such as adding a start screen, better collision detection, and more features.<|endoftext|>
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF | mradermacher | 2024-08-20T12:37:18Z | 23 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v1",
"base_model:quantized:EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-20T03:38:52Z | ---
base_model: EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v1
datasets:
- yahma/alpaca-cleaned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EpistemeAI/Fireball-Mistral-Nemo-Base-2407-sft-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Mistral-Nemo-Base-2407-sft-v1-i1-GGUF/resolve/main/Fireball-Mistral-Nemo-Base-2407-sft-v1.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/control-llama-1B-GGUF | mradermacher | 2024-08-20T12:28:02Z | 64 | 0 | transformers | [
"transformers",
"gguf",
"goldfish-loss",
"memorization",
"mitigation",
"en",
"base_model:tomg-group-umd/control-llama-1B",
"base_model:quantized:tomg-group-umd/control-llama-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-20T12:16:20Z | ---
base_model: tomg-group-umd/control-llama-1B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- goldfish-loss
- memorization
- mitigation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tomg-group-umd/control-llama-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/control-llama-1B-GGUF/resolve/main/control-llama-1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
GPT4All-Community/Yi-1.5-9B-Chat-16K-GGUF | GPT4All-Community | 2024-08-20T12:26:36Z | 688 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"GGUF",
"GPT4All",
"text-generation",
"en",
"zh",
"arxiv:2403.04652",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:quantized:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us",
"conversational"
] | text-generation | 2024-07-26T11:07:03Z | ---
base_model: 01-ai/Yi-1.5-9B-Chat-16K
pipeline_tag: text-generation
inference: false
model_creator: 01-AI
model_name: Yi-1.5-9B-Chat-16K
model_type: llama
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: ThiloteE
tags:
- text-generation-inference
- transformers
- GGUF
- GPT4All
---
>[!Note]
>This model is assumed to perform well but may require more testing and user feedback. Be aware that only models featured within the GPT4All GUI are curated and officially supported by Nomic. Use at your own risk.
# About
<!-- ### quantize_version: 3 -->
<!-- ### convert_type: hf -->
- Static quants of https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K
- Quantized by [ThiloteE](https://huggingface.co/ThiloteE) with llama.cpp commit [c3776ca](https://github.com/ggerganov/llama.cpp/commit/c3776cacabce2ee35f172fb72be7a519752125fa)
These quants were created with a customized configuration that has been shown not to cause visible end-of-string (EOS) tokens during inference with [GPT4All](https://www.nomic.ai/gpt4all).
The config.json, generation_config.json and tokenizer_config.json differ from the original configuration found in the source model's repository at the time these quants were created.
# Prompt Template (for GPT4All)
System Prompt:
```
<|im_start|>system
Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>
```
Chat Template:
```
<|im_start|>user
%1<|im_end|>
<|im_start|>assistant
%2<|im_end|>
```
Do not miss the newlines at the end!
# Context Length
`16384`
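As a rough sketch of local usage with the `gpt4all` Python bindings (parameter names follow recent versions of the bindings; the file name should match whichever quant you downloaded from this repository):

```python
from gpt4all import GPT4All

# Point model_path at the folder containing the downloaded GGUF file.
model = GPT4All(
    "Yi-1.5-9B-Chat-16K.Q4_0.gguf",  # assumed local file name; use your downloaded quant
    model_path=".",
    allow_download=False,
    n_ctx=16384,
)

# chat_session wraps user/assistant turns in a chat template (see the template above).
with model.chat_session():
    print(model.generate("Why is the sky blue?", max_tokens=256))
```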
# Provided Quants
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pstella-16b-GGUF/resolve/main/pstella-16b.Q4_K_M.gguf) | Q4_0 | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pstella-16b-GGUF/resolve/main/pstella-16b.f16.gguf) | f16 | 17.2 | 16 bpw, overkill |
# About GGUF
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
Here is a handy graph by ikawrakow comparing some quant types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
# Thanks
I thank Mradermacher and TheBloke for Inspiration to this model card and their contributions to open source. I thank 3Simplex for everything.
Shoutout to the GPT4All and llama.cpp communities :-)
------
<!-- footer end -->
<!-- original-model-card start -->
# Original Model card:
---
license: apache-2.0
---
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">๐ GitHub</a> โข
<a href="https://discord.gg/hYUwWddeAu">๐พ Discord</a> โข
<a href="https://twitter.com/01ai_yi">๐ค Twitter</a> โข
<a href="https://github.com/01-ai/Yi-1.5/issues/2">๐ฌ WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">๐ Paper</a> โข
<a href="https://01-ai.github.io/">๐ช Tech Blog</a> โข
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">๐ FAQ</a> โข
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">๐ Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-34B-Chat-16K | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-Chat | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-Chat-16K | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-6B-Chat | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-34B-32K | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-9B-32K | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
| Yi-1.5-6B | • [Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [ModelScope](https://www.modelscope.cn/organization/01ai) • [wisemodel](https://wisemodel.cn/organization/01.AI) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
<!-- original-model-card end -->
<!-- end -->
|
PrunaAI/ContactDoctor-Bio-Medical-Llama-3-8B-bnb-4bit-smashed | PrunaAI | 2024-08-20T12:24:57Z | 5 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"base_model:ContactDoctor/Bio-Medical-Llama-3-8B",
"base_model:quantized:ContactDoctor/Bio-Medical-Llama-3-8B",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-08-20T12:22:07Z | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: ContactDoctor/Bio-Medical-Llama-3-8B
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly against the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements of the original repo ContactDoctor/Bio-Medical-Llama-3-8B are installed. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit (bitsandbytes) smashed model and the original tokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/ContactDoctor-Bio-Medical-Llama-3-8B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("ContactDoctor/Bio-Medical-Llama-3-8B")

# Tokenize a prompt and generate up to 216 new tokens
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model ContactDoctor/Bio-Medical-Llama-3-8B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
MaziyarPanahi/Qwen2-Math-72B-Instruct-GGUF | MaziyarPanahi | 2024-08-20T12:10:36Z | 109 | 2 | null | [
"gguf",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:Qwen/Qwen2-Math-72B-Instruct",
"base_model:quantized:Qwen/Qwen2-Math-72B-Instruct",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2024-08-08T14:33:29Z | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- text-generation
model_name: Qwen2-Math-72B-Instruct-GGUF
base_model: Qwen/Qwen2-Math-72B-Instruct
inference: false
model_creator: Qwen
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Qwen2-Math-72B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-Math-72B-Instruct-GGUF)
- Model creator: [Qwen](https://huggingface.co/Qwen)
- Original model: [Qwen/Qwen2-Math-72B-Instruct](https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct)
## Description
[MaziyarPanahi/Qwen2-Math-72B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-Math-72B-Instruct-GGUF) contains GGUF format model files for [Qwen/Qwen2-Math-72B-Instruct](https://huggingface.co/Qwen/Qwen2-Math-72B-Instruct).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
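A quantized variant of this repo can also be pulled programmatically with `huggingface_hub`; a minimal sketch (the quant name pattern is illustrative; pick whichever quant level you need, and `snapshot_download` also picks up multi-part files):

```python
from huggingface_hub import snapshot_download

# Download only the files of one quant level (e.g. Q4_K_M), including split parts.
local_dir = snapshot_download(
    repo_id="MaziyarPanahi/Qwen2-Math-72B-Instruct-GGUF",
    allow_patterns=["*Q4_K_M*"],
)
print(local_dir)
```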
## Special thanks
๐ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Hemg/gte-small-indo | Hemg | 2024-08-20T12:04:30Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-08-20T12:03:45Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Hemg/gte-small-indo
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Hemg/gte-small-indo')
embeddings = model.encode(sentences)
print(embeddings)
```
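Since the embeddings are normalized 384-dimensional vectors (see the architecture section below), cosine similarity is the natural way to compare sentences; a small sketch:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Hemg/gte-small-indo')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"], convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```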
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Hemg/gte-small-indo)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 333 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 50,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 66,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Bruece/textual_inversion_cat | Bruece | 2024-08-20T11:55:41Z | 18 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-08-20T11:07:41Z | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Bruece/textual_inversion_cat
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
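A minimal sketch of loading these weights with diffusers; the placeholder token `<cat-toy>` is an assumption, so use the token stored in this repository's learned embeddings if it differs:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model these embeddings were trained against
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the textual inversion embeddings from this repository
pipe.load_textual_inversion("Bruece/textual_inversion_cat")

# "<cat-toy>" is an assumed placeholder token; replace it with the trained token
image = pipe("A <cat-toy> sitting on a park bench", num_inference_steps=50).images[0]
image.save("cat.png")
```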
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF | mradermacher | 2024-08-20T11:45:47Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Piotrasz/Llama-2-50-ROME-pl-FINAL",
"base_model:quantized:Piotrasz/Llama-2-50-ROME-pl-FINAL",
"endpoints_compatible",
"region:us"
] | null | 2024-08-20T11:20:16Z | ---
base_model: Piotrasz/Llama-2-50-ROME-pl-FINAL
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Piotrasz/Llama-2-50-ROME-pl-FINAL
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-2-50-ROME-pl-FINAL-GGUF/resolve/main/Llama-2-50-ROME-pl-FINAL.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
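Once a quant is downloaded, it can be loaded locally, for example with `llama-cpp-python`; a rough sketch (file name taken from the Q4_K_M row above, prompt is purely illustrative):

```python
from llama_cpp import Llama

# Load the downloaded GGUF file; n_ctx is the context window to allocate.
llm = Llama(model_path="Llama-2-50-ROME-pl-FINAL.Q4_K_M.gguf", n_ctx=4096)

output = llm("Question: What is the capital of Poland? Answer:", max_tokens=64)
print(output["choices"][0]["text"])
```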
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jan-hq/Llama3.1-s-instruct-2024-08-19-epoch-4 | jan-hq | 2024-08-20T11:42:52Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T11:27:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M-GGUF | Nekuromento | 2024-08-20T11:42:18Z | 9 | 1 | null | [
"gguf",
"Llama-3",
"finetune",
"llama-cpp",
"gguf-my-repo",
"base_model:WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B",
"base_model:quantized:WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-20T11:41:39Z | ---
base_model: WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B
license: llama3.1
tags:
- Llama-3
- finetune
- llama-cpp
- gguf-my-repo
---
# Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B`](https://huggingface.co/WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q4_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q4_k_m.gguf -c 2048
```
|
Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q5_K_M-GGUF | Nekuromento | 2024-08-20T11:39:02Z | 5 | 0 | null | [
"gguf",
"Llama-3",
"finetune",
"llama-cpp",
"gguf-my-repo",
"base_model:WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B",
"base_model:quantized:WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-20T11:38:17Z | ---
base_model: WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B
license: llama3.1
tags:
- Llama-3
- finetune
- llama-cpp
- gguf-my-repo
---
# Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B`](https://huggingface.co/WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/WhiteRabbitNeo/Llama-3.1-WhiteRabbitNeo-2-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q5_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q5_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q5_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nekuromento/Llama-3.1-WhiteRabbitNeo-2-8B-Q5_K_M-GGUF --hf-file llama-3.1-whiterabbitneo-2-8b-q5_k_m.gguf -c 2048
```
|
KoichiYasuoka/roberta-small-belarusian-upos | KoichiYasuoka | 2024-08-20T11:35:58Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"belarusian",
"pos",
"dependency-parsing",
"be",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-small-belarusian",
"base_model:finetune:KoichiYasuoka/roberta-small-belarusian",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-17T08:01:31Z | ---
language:
- "be"
tags:
- "belarusian"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-small-belarusian
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
---
# roberta-small-belarusian-upos
## Model Description
This is a RoBERTa model pre-trained with [UD_Belarusian](https://universaldependencies.org/be/) for POS-tagging and dependency-parsing, derived from [roberta-small-belarusian](https://huggingface.co/KoichiYasuoka/roberta-small-belarusian). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech) and [FEATS](https://universaldependencies.org/u/feat/).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-belarusian-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-belarusian-upos")
```
or
```
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-small-belarusian-upos")
```
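A short tagging sketch with the pipeline API (the Belarusian example sentence is only illustrative):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, TokenClassificationPipeline

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-small-belarusian-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-small-belarusian-upos")
pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model, aggregation_strategy="simple")

# Tag each word with its UPOS label
print(pipeline("Я жыву ў Мінску."))
```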
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
KoichiYasuoka/deberta-base-korean-morph-upos | KoichiYasuoka | 2024-08-20T11:23:59Z | 120 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"korean",
"pos",
"dependency-parsing",
"ko",
"dataset:universal_dependencies",
"base_model:team-lucid/deberta-v3-base-korean",
"base_model:finetune:team-lucid/deberta-v3-base-korean",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-11-10T01:48:05Z | ---
language:
- "ko"
tags:
- "korean"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: team-lucid/deberta-v3-base-korean
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."
- text: "็ด
ๆน ๋ง์ด ๋์ ็ด
ๆน๋ผ ็่ฆบํ๋ค."
---
# deberta-base-korean-morph-upos
## Model Description
This is a DeBERTa(V3) model pre-trained on Korean texts for POS-tagging and dependency-parsing, derived from [deberta-v3-base-korean](https://huggingface.co/team-lucid/deberta-v3-base-korean) and [morphUD-korean](https://github.com/jungyeul/morphUD-korean). Every morpheme (형태소) is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-korean-morph-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/deberta-base-korean-morph-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/deberta-base-korean-morph-upos")
print(nlp("ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
mradermacher/Bitnet-SmolLM-135M-GGUF | mradermacher | 2024-08-20T11:18:13Z | 72 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:abideen/Cosmopedia-100k-pretrain",
"base_model:ighoshsubho/Bitnet-SmolLM-135M",
"base_model:quantized:ighoshsubho/Bitnet-SmolLM-135M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-20T11:11:15Z | ---
base_model: ighoshsubho/Bitnet-SmolLM-135M
datasets:
- abideen/Cosmopedia-100k-pretrain
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ighoshsubho/Bitnet-SmolLM-135M
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.IQ3_XS.gguf) | IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.IQ3_S.gguf) | IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.IQ3_M.gguf) | IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Bitnet-SmolLM-135M-GGUF/resolve/main/Bitnet-SmolLM-135M.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mindsignal/upstage-SOLAR-10.7B-Instruct-v1.0-4bit-financesinfo-ver2 | mindsignal | 2024-08-20T11:14:20Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T11:10:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoichiYasuoka/bert-large-japanese-unidic-luw-upos | KoichiYasuoka | 2024-08-20T11:13:43Z | 109 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:tohoku-nlp/bert-large-japanese",
"base_model:finetune:tohoku-nlp/bert-large-japanese",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
base_model: tohoku-nlp/bert-large-japanese
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
---
# bert-large-japanese-unidic-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese](https://huggingface.co/tohoku-nlp/bert-large-japanese). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
s="ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-unidic-luw-upos")
print(nlp("ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"))
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
## Reference
安岡孝一 (Koichi Yasuoka): [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告 (IPSJ SIG Technical Reports), Vol.2022-CH-128, No.7 (February 2022), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
mradermacher/3-goldfish-loss-llama-1B-GGUF | mradermacher | 2024-08-20T11:12:16Z | 103 | 0 | transformers | [
"transformers",
"gguf",
"goldfish-loss",
"memorization",
"mitigation",
"en",
"dataset:tomg-group-umd/wikipedia-en-2k-samples",
"base_model:tomg-group-umd/3-goldfish-loss-llama-1B",
"base_model:quantized:tomg-group-umd/3-goldfish-loss-llama-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-20T11:07:07Z | ---
base_model: tomg-group-umd/3-goldfish-loss-llama-1B
datasets:
- tomg-group-umd/wikipedia-en-2k-samples
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- goldfish-loss
- memorization
- mitigation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tomg-group-umd/3-goldfish-loss-llama-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/3-goldfish-loss-llama-1B-GGUF/resolve/main/3-goldfish-loss-llama-1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KoichiYasuoka/bert-base-japanese-unidic-luw-upos | KoichiYasuoka | 2024-08-20T11:11:51Z | 121 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"japanese",
"pos",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:tohoku-nlp/bert-base-japanese-v2",
"base_model:finetune:tohoku-nlp/bert-base-japanese-v2",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-03-02T23:29:04Z | ---
language:
- "ja"
tags:
- "japanese"
- "token-classification"
- "pos"
- "wikipedia"
- "dependency-parsing"
base_model: tohoku-nlp/bert-base-japanese-v2
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
---
# bert-base-japanese-unidic-luw-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-japanese-v2](https://huggingface.co/tohoku-nlp/bert-base-japanese-v2). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
s="ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"
t=tokenizer.tokenize(s)
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(t,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-japanese-unidic-luw-upos")
print(nlp("ๅฝๅขใฎ้ทใใใณใใซใๆใใใจ้ชๅฝใงใใฃใใ"))
```
[fugashi](https://pypi.org/project/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite) are required.
## Reference
安岡孝一: [Transformersと国語研長単位による日本語係り受け解析モデルの製作](http://id.nii.ac.jp/1001/00216223/), 情報処理学会研究報告, Vol.2022-CH-128, No.7 (2022年2月), pp.1-8.
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
mradermacher/mistral-12b-neptune-6k-instruct-GGUF | mradermacher | 2024-08-20T11:11:31Z | 46 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Pirr/mistral-12b-neptune-6k-instruct",
"base_model:quantized:Pirr/mistral-12b-neptune-6k-instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-20T06:33:00Z | ---
base_model: Pirr/mistral-12b-neptune-6k-instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Pirr/mistral-12b-neptune-6k-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
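If you just want to try one of the quants from the table below locally, a minimal sketch with the `llama-cpp-python` bindings (one option among several; not part of the original card) looks roughly like this, after downloading the Q4_K_M file from this repository:
```py
# Minimal sketch: load one of the GGUF quants listed below with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-12b-neptune-6k-instruct.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,  # context window; adjust to your available RAM
)

out = llm("Write one sentence about the planet Neptune.", max_tokens=64)
print(out["choices"][0]["text"])
```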
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/mistral-12b-neptune-6k-instruct-GGUF/resolve/main/mistral-12b-neptune-6k-instruct.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KasuleTrevor/wav2vec2-xls-r-300m-nyn_filtered-yogera-v3 | KasuleTrevor | 2024-08-20T11:11:27Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-08-19T00:33:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoichiYasuoka/deberta-large-japanese-unidic-ud-head | KoichiYasuoka | 2024-08-20T11:10:26Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"question-answering",
"japanese",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/deberta-large-japanese-unidic",
"base_model:finetune:KoichiYasuoka/deberta-large-japanese-unidic",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-19T00:10:56Z | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
base_model: KoichiYasuoka/deberta-large-japanese-unidic
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
inference:
parameters:
align_to_words: false
widget:
- text: "ๅฝ่ช"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ๆ็งๆธ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ใฎ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ช[MASK]ๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
---
# deberta-large-japanese-unidic-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on 青空文庫 (Aozora Bunko) texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-large-japanese-unidic](https://huggingface.co/KoichiYasuoka/deberta-large-japanese-unidic) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForQuestionAnswering
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
question="ๅฝ่ช"
context="ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
inputs=tokenizer(question,context,return_tensors="pt")
outputs=model(**inputs)
start,end=torch.argmax(outputs.start_logits),torch.argmax(outputs.end_logits)
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0,start:end+1]))
```
or
```py
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
class TaggerPipeline(TokenClassificationPipeline):
def __call__(self,text):
d=super().__call__(text)
if len(d)>0 and ("start" not in d[0] or d[0]["start"]==None):
import spacy_alignments as tokenizations
v=[x["word"].replace(" ","") for x in d]
a2b,b2a=tokenizations.get_alignments(v,text)
for i,t in enumerate(a2b):
s,e=(0,0) if t==[] else (t[0],t[-1]+1)
if v[i].startswith(self.tokenizer.unk_token):
s=([[-1]]+[x for x in a2b[0:i] if x>[]])[-1][-1]+1
if v[i].endswith(self.tokenizer.unk_token):
e=([x for x in a2b[i+1:] if x>[]]+[[len(text)]])[0][0]
d[i]["start"],d[i]["end"]=s,e
return d
class TransformersSlowUD(object):
def __init__(self,bert):
import os
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.utils import cached_file
c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TaggerPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TaggerPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersSlowUD("KoichiYasuoka/deberta-large-japanese-unidic-ud-head")
print(nlp("ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"))
```
[fugashi](https://pypi.org/project/fugashi) [unidic-lite](https://pypi.org/project/unidic-lite) [spacy-alignments](https://pypi.org/project/spacy-alignments) and [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/) required.
|
mradermacher/Llama-3.1-Storm-8B-i1-GGUF | mradermacher | 2024-08-20T11:02:38Z | 89 | 3 | transformers | [
"transformers",
"gguf",
"llama-3.1",
"conversational",
"instruction following",
"reasoning",
"function calling",
"mergekit",
"finetuning",
"axolotl",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:akjindal53244/Llama-3.1-Storm-8B",
"base_model:quantized:akjindal53244/Llama-3.1-Storm-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2024-08-20T06:18:35Z | ---
base_model: akjindal53244/Llama-3.1-Storm-8B
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- llama-3.1
- conversational
- instruction following
- reasoning
- function calling
- mergekit
- finetuning
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
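As a concrete starting point, a single quant file from this repository can also be fetched programmatically. The sketch below uses `huggingface_hub` (an assumption; it is not mentioned in the original card) and the i1-Q4_K_M file from the table that follows:
```py
# Sketch: download one imatrix quant from this repository with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Llama-3.1-Storm-8B-i1-GGUF",
    filename="Llama-3.1-Storm-8B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
)
print(path)  # local cache path of the downloaded GGUF file
```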
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-Storm-8B-i1-GGUF/resolve/main/Llama-3.1-Storm-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
shreedharsmurnal/Meta-Llama-3.1-8B-supervised-Instruction-fine-tuned | shreedharsmurnal | 2024-08-20T10:53:53Z | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-08-20T10:33:14Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoichiYasuoka/roberta-base-thai-spm-ud-head | KoichiYasuoka | 2024-08-20T10:53:42Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"thai",
"dependency-parsing",
"th",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-base-thai-spm",
"base_model:finetune:KoichiYasuoka/roberta-base-thai-spm",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-07-16T15:00:05Z | ---
language:
- "th"
tags:
- "thai"
- "question-answering"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-base-thai-spm
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "question-answering"
inference:
parameters:
align_to_words: false
widget:
- text: "เธเธงเนเธฒ"
context: "เธซเธฅเธฒเธขเธซเธฑเธงเธเธตเธเธงเนเธฒเธซเธฑเธงเนเธเธตเธขเธง"
- text: "เธซเธฅเธฒเธข"
context: "เธซเธฅเธฒเธขเธซเธฑเธงเธเธตเธเธงเนเธฒเธซเธฑเธงเนเธเธตเธขเธง"
- text: "เธซเธฑเธง"
context: "เธซเธฅเธฒเธข[MASK]เธเธตเธเธงเนเธฒเธซเธฑเธงเนเธเธตเธขเธง"
---
# roberta-base-thai-spm-ud-head
## Model Description
This is a RoBERTa model pretrained on Thai Wikipedia texts for dependency-parsing (head-detection on Universal Dependencies) as question-answering, derived from [roberta-base-thai-spm](https://huggingface.co/KoichiYasuoka/roberta-base-thai-spm). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/roberta-base-thai-spm-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="เธเธงเนเธฒ",context="เธซเธฅเธฒเธขเธซเธฑเธงเธเธตเธเธงเนเธฒเธซเธฑเธงเนเธเธตเธขเธง"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.utils import cached_file
c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/roberta-base-thai-spm-ud-head")
print(nlp("เธซเธฅเธฒเธขเธซเธฑเธงเธเธตเธเธงเนเธฒเธซเธฑเธงเนเธเธตเธขเธง"))
```
|
SicariusSicariiStuff/PygmalionAI_pygmalion-7b-EXL2-8.0bpw | SicariusSicariiStuff | 2024-08-20T10:50:45Z | 5 | 0 | null | [
"safetensors",
"llama",
"text generation",
"conversational",
"text-generation",
"en",
"8-bit",
"exl2",
"region:us"
] | text-generation | 2024-08-20T10:22:25Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
## Model Details
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
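To make the formatting above concrete, here is a small self-contained Python sketch that assembles a prompt in this layout from a persona and a running chat history. It is plain string building only; the character name, persona and messages are the illustrative examples from above, and actual generation with the EXL2 weights is left to whatever backend you use:
```py
# Sketch: build a Pygmalion-style prompt from a persona and chat history.
def build_prompt(character, persona, history, user_message):
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    # history is a list of (speaker, message) pairs; keep only a sliding window
    # so the prompt stays within the model's context length.
    for speaker, message in history[-8:]:
        lines.append(f"{speaker}: {message}")
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")  # the model continues from here
    return "\n".join(lines)

prompt = build_prompt(
    character="Assistant",
    persona="Assistant is a highly intelligent language model trained to comply with user requests.",
    history=[("Assistant", "Hello! How may I help you today?")],
    user_message="What is Zork?",
)
print(prompt)
```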
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Neko-Institute-of-Science__pygmalion-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.29 |
| ARC (25-shot) | 51.37 |
| HellaSwag (10-shot) | 77.81 |
| MMLU (5-shot) | 35.68 |
| TruthfulQA (0-shot) | 34.54 |
| Winogrande (5-shot) | 72.22 |
| GSM8K (5-shot) | 4.62 |
| DROP (3-shot) | 5.79 |
|
BogdanTurbal/model_roberta_large_d_hate_bias_political_bias_ep_2_6_a_sqn_a_b_p_100_5_v_13 | BogdanTurbal | 2024-08-20T10:49:50Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_13",
"base_model:finetune:BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_13",
"license:mit",
"region:us"
] | null | 2024-08-20T10:49:10Z | ---
license: mit
base_model: BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_13
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_roberta_large_d_hate_bias_political_bias_ep_2_6_a_sqn_a_b_p_100_5_v_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_roberta_large_d_hate_bias_political_bias_ep_2_6_a_sqn_a_b_p_100_5_v_13
This model is a fine-tuned version of [BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_13](https://huggingface.co/BogdanTurbal/model_roberta_large_d_hate_bias_ep_2_sqn_a_p_100_v_13) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4746
- Accuracy: 0.7285
- F1 Micro: 0.7285
- Auc: 0.7994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
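For readers who want to reproduce a comparable run, the hyperparameters listed above correspond roughly to the following `transformers` `TrainingArguments` (a sketch only; the original training script is not published here, and the output path is a placeholder):
```py
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./roberta-large-hate-political-bias",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
)
```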
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.657 | 1.0 | 37 | 0.5929 | 0.7010 | 0.7010 | 0.7812 |
| 0.4786 | 2.0 | 74 | 0.5865 | 0.7268 | 0.7268 | 0.7992 |
| 0.3638 | 3.0 | 111 | 0.9899 | 0.6899 | 0.6899 | 0.8014 |
| 0.167 | 4.0 | 148 | 0.9127 | 0.7311 | 0.7311 | 0.7948 |
| 0.0297 | 5.0 | 185 | 1.2450 | 0.7268 | 0.7268 | 0.7976 |
| 0.1468 | 6.0 | 222 | 1.4746 | 0.7285 | 0.7285 | 0.7994 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
KoichiYasuoka/bert-large-japanese-wikipedia-ud-head | KoichiYasuoka | 2024-08-20T10:45:52Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"japanese",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/bert-large-japanese-char-extended",
"base_model:finetune:KoichiYasuoka/bert-large-japanese-char-extended",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-21T07:38:19Z | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
base_model: KoichiYasuoka/bert-large-japanese-char-extended
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
inference:
parameters:
align_to_words: false
widget:
- text: "ๅฝ่ช"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ๆ็งๆธ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ใฎ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ช[MASK]ๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
---
# bert-large-japanese-wikipedia-ud-head
## Model Description
This is a BERT model pretrained on Japanese Wikipedia texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="ๅฝ่ช",context="ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.utils import cached_file
c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head")
print(nlp("ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"))
```
|
KoichiYasuoka/bert-base-japanese-wikipedia-ud-head | KoichiYasuoka | 2024-08-20T10:44:30Z | 351 | 1 | transformers | [
"transformers",
"pytorch",
"bert",
"question-answering",
"japanese",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/bert-base-japanese-char-extended",
"base_model:finetune:KoichiYasuoka/bert-base-japanese-char-extended",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-20T21:58:52Z | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
base_model: KoichiYasuoka/bert-base-japanese-char-extended
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
inference:
parameters:
align_to_words: false
widget:
- text: "ๅฝ่ช"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ๆ็งๆธ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ใฎ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ช[MASK]ๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
---
# bert-base-japanese-wikipedia-ud-head
## Model Description
This is a BERT model pretrained on Japanese Wikipedia texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [bert-base-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-base-japanese-char-extended) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="ๅฝ่ช",context="ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.utils import cached_file
c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/bert-base-japanese-wikipedia-ud-head")
print(nlp("ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"))
```
|
mergekit-community/CV_FLGD | mergekit-community | 2024-08-20T10:44:28Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-20T10:41:36Z | ---
base_model:
- meta-llama/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
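For intuition, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which tends to preserve the norm of the interpolated weights. The snippet below is an illustrative NumPy sketch of the formula only, not mergekit's actual implementation:
```py
# Illustrative SLERP between two weight tensors (not mergekit's code).
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    a, b = v0.ravel(), v1.ravel()
    # angle between the two flattened weight vectors
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1
```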
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: meta-llama/Meta-Llama-3-8B
layer_range:
- 0
- 32
- model: meta-llama/Meta-Llama-3-8B
layer_range:
- 0
- 32
merge_method: slerp
base_model: meta-llama/Meta-Llama-3-8B
parameters:
t:
- filter: self_attn
value:
- 0
- 0.5
- 0.3
- 0.7
- 1
- filter: mlp
value:
- 1
- 0.5
- 0.7
- 0.3
- 0
- value: 0.5
dtype: bfloat16
```
|
jiyeonkim/llava-tulu2dpo-ckpt-200 | jiyeonkim | 2024-08-20T10:41:21Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llava",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-08-20T10:37:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KoichiYasuoka/deberta-base-japanese-aozora-ud-head | KoichiYasuoka | 2024-08-20T10:40:28Z | 536 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"question-answering",
"japanese",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/deberta-base-japanese-aozora",
"base_model:finetune:KoichiYasuoka/deberta-base-japanese-aozora",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-06-15T04:02:27Z | ---
language:
- "ja"
tags:
- "japanese"
- "question-answering"
- "dependency-parsing"
base_model: KoichiYasuoka/deberta-base-japanese-aozora
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
inference:
parameters:
align_to_words: false
widget:
- text: "ๅฝ่ช"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ๆ็งๆธ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
- text: "ใฎ"
context: "ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ช[MASK]ๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"
---
# deberta-base-japanese-aozora-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on 青空文庫 (Aozora Bunko) texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [deberta-base-japanese-aozora](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-aozora) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when specifying a multiple-used word as `question`.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="ๅฝ่ช",context="ๅ
จๅญฆๅนดใซใใใฃใฆๅฐๅญฆๆ กใฎๅฝ่ชใฎๆ็งๆธใซๆฟใ็ตตใ็จใใใใฆใใ"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.utils import cached_file
c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-base-japanese-aozora-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
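When the word passed as `question` occurs more than once in the sentence, the intended occurrence can be singled out by replacing it with `[MASK]` in `context`, exactly as the widget examples above do. A minimal sketch with the plain pipeline defined earlier:

```py
print(qap(question="の",context="全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"))
```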
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
|
SicariusSicariiStuff/PygmalionAI_pygmalion-7b-EXL2-5.0bpw | SicariusSicariiStuff | 2024-08-20T10:38:08Z | 7 | 0 | null | [
"safetensors",
"llama",
"text generation",
"conversational",
"text-generation",
"en",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-08-20T10:21:47Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
## Model Details
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
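If you are scripting against the model directly rather than through a UI, it can help to see the persona/`<START>`/history layout assembled programmatically. The sketch below is only illustrative: `build_pygmalion_prompt` and its arguments are hypothetical names, and generation itself (with whatever backend you use for these EXL2 weights, stopping at `</s>`) is left out.

```python
def build_pygmalion_prompt(character: str, persona: str, history: list[str], user_message: str) -> str:
    """Assemble a Pygmalion-style prompt: persona, <START> delimiter, sliding chat history."""
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(history)                 # e.g. ["You: Hi!", f"{character}: Hello there!"]
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")         # the model is expected to continue from here
    return "\n".join(lines)

prompt = build_pygmalion_prompt(
    "Assistant",
    "Assistant is a highly intelligent language model trained to comply with user requests.",
    [],
    "What is Zork?",
)
print(prompt)
```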
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Neko-Institute-of-Science__pygmalion-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.29 |
| ARC (25-shot) | 51.37 |
| HellaSwag (10-shot) | 77.81 |
| MMLU (5-shot) | 35.68 |
| TruthfulQA (0-shot) | 34.54 |
| Winogrande (5-shot) | 72.22 |
| GSM8K (5-shot) | 4.62 |
| DROP (3-shot) | 5.79 |
|
QuantFactory/HelpingAI-3B-coder-GGUF | QuantFactory | 2024-08-20T10:37:51Z | 71 | 2 | null | [
"gguf",
"HelpingAI",
"Emotionally Intelligent",
"EQ",
"Coding",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-08-20T10:13:56Z |
---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
- EQ
- Coding
---

# QuantFactory/HelpingAI-3B-coder-GGUF
This is quantized version of [OEvortex/HelpingAI-3B-coder](https://huggingface.co/OEvortex/HelpingAI-3B-coder) created using llama.cpp
# Original Model Card
# HelpingAI-3B-coder: Emotionally Intelligent Conversational AI with Coding Capabilities

## Overview
HelpingAI-3B-coder is a large language model designed for emotionally intelligent conversational interactions and coding assistance. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts, while also providing reliable coding support. The model aims to be a supportive AI companion that can attune to users' emotional states, communicative needs, and coding requirements.
## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Assist users with coding tasks and programming-related queries
- Continuously improve emotional awareness, dialogue skills, and coding capabilities
## Methodology
HelpingAI-3B-coder is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
- Coding datasets to enhance programming support
## Emotional Quotient (EQ)
HelpingAI-3B-coder has achieved an impressive Emotional Quotient (EQ), surpassing almost all AI models in emotional intelligence. This EQ score reflects its advanced ability to understand and respond to human emotions in a supportive and empathetic manner.
## Usage Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
# Load the HelpingAI-3B-coder model
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-3B-coder", trust_remote_code=True).to("cuda")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-3B-coder", trust_remote_code=True)
# Initialize TextStreamer for smooth conversation flow
streamer = TextStreamer(tokenizer)
# Define the chat input
chat = [
{ "role": "system", "content": "You are HelpingAI, an emotionally intelligent AI. Always respond in the HelpingAI style. Provide concise and to-the-point answers." },
{ "role": "user", "content": "Can you help me write a Python function to reverse a string?" }
]
# Apply the chat template
chat_text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# Tokenize the text
inputs = tokenizer(chat_text, return_tensors="pt", return_attention_mask=False).to("cuda")
# Generate text
generated_text = model.generate(
**inputs,
max_length=500,
top_p=0.95,
do_sample=True,
temperature=0.7,
use_cache=True,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer
)
# # Decode the generated text
# output_text = tokenizer.decode(generated_text[0], skip_special_tokens=True)
# # Print the generated text
# print(output_text)
# System:
# You are HelpingAI, an emotional AI that always answers questions in HelpingAI style and always be to the point and answer as short as possible.
# Question:
# Can you help me write a Python function to reverse a string?
# Answer:
# Sure! Here's a Python function that reverses a string:
# ```python
# def reverse_string(input_string):
# return input_string[::-1]
# ```
# This function uses slicing with a negative step to reverse the input string. The `[::-1]` syntax reverses the string by starting from the end and going backwards.
# You can use this function like this:
# ```python
# print(reverse_string("Hello, World!"))
# ```
# This will output: `!dlroW,olleH`
# I hope this helps! Let me know if you have any other questions.
```
|
quangtqv/bge_bi_encoder_20_8 | quangtqv | 2024-08-20T10:36:44Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-08-20T10:36:29Z | ---
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
widget: []
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("quangtqv/bge_bi_encoder_20_8")
# Run inference
sentences = [
'The weather is lovely today.',
"It's so sunny outside!",
'He drove to the stadium.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.1.2
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Cheng98/deberta-v3-large | Cheng98 | 2024-08-20T10:31:56Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-08-20T10:31:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Geetansh13/Florence2_Sleeve | Geetansh13 | 2024-08-20T10:31:30Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-08-20T10:30:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SicariusSicariiStuff/PygmalionAI_pygmalion-7b-EXL2-3.0bpw | SicariusSicariiStuff | 2024-08-20T10:30:50Z | 5 | 0 | null | [
"safetensors",
"llama",
"text generation",
"conversational",
"text-generation",
"en",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2024-08-20T10:21:07Z | ---
language:
- en
thumbnail: null
tags:
- text generation
- conversational
pipeline_tag: text-generation
inference: false
---
<h1 style="text-align: center">Pygmalion 7B</h1>
<h2 style="text-align: center">A conversational LLaMA fine-tune.</h2>
## Model Details
Converted from the XORs weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-7b
Pygmalion 7B is a dialogue model based on Meta's LLaMA-7B.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, `<START>` should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and `[DIALOGUE HISTORY]` is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
```
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
```
Which will generate something like:
```
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
```
The model will automatically emit an end-of-text token (`</s>`) when it judges that the response is complete.
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Neko-Institute-of-Science__pygmalion-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.29 |
| ARC (25-shot) | 51.37 |
| HellaSwag (10-shot) | 77.81 |
| MMLU (5-shot) | 35.68 |
| TruthfulQA (0-shot) | 34.54 |
| Winogrande (5-shot) | 72.22 |
| GSM8K (5-shot) | 4.62 |
| DROP (3-shot) | 5.79 |
|
Cheng98/deberta-v3-base | Cheng98 | 2024-08-20T10:30:29Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-08-20T10:29:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
priyesh17/FineTuned_Code_Llama3.1 | priyesh17 | 2024-08-20T10:26:34Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"Code Assistant",
"CodeLlama",
"text-generation",
"en",
"dataset:openbmb/UltraInteract_sft",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-08-14T07:43:59Z | ---
## CodeLlama
license: apache-2.0
datasets:
- openbmb/UltraInteract_sft
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- Code Assistant
- CodeLlama
--- |
fairuzafnan/donut-docvqa | fairuzafnan | 2024-08-20T10:25:01Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-08-20T10:23:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AaronKolibrain/donut_oppel_id_logistics_full_adress | AaronKolibrain | 2024-08-20T10:24:18Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-07-11T16:00:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
srikarvar/multilingual-e5-small-pairclass-4 | srikarvar | 2024-08-20T10:17:51Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:971",
"loss:OnlineContrastiveLoss",
"arxiv:1908.10084",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-08-20T10:17:17Z | ---
base_model: intfloat/multilingual-e5-small
datasets: []
language: []
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- dot_accuracy
- dot_accuracy_threshold
- dot_f1
- dot_f1_threshold
- dot_precision
- dot_recall
- dot_ap
- manhattan_accuracy
- manhattan_accuracy_threshold
- manhattan_f1
- manhattan_f1_threshold
- manhattan_precision
- manhattan_recall
- manhattan_ap
- euclidean_accuracy
- euclidean_accuracy_threshold
- euclidean_f1
- euclidean_f1_threshold
- euclidean_precision
- euclidean_recall
- euclidean_ap
- max_accuracy
- max_accuracy_threshold
- max_f1
- max_f1_threshold
- max_precision
- max_recall
- max_ap
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:971
- loss:OnlineContrastiveLoss
widget:
- source_sentence: Steps to bake a pie
sentences:
- How to bake a pie?
- What are the ingredients of a pizza?
- How to create a business plan?
- source_sentence: What are the benefits of yoga?
sentences:
- If I combine the yellow and blue colors, what color will I get?
- Can you help me understand this contract?
- What are the benefits of meditation?
- source_sentence: Capital city of Canada
sentences:
- What time does the movie start?
- Who is the President of the United States?
- What is the capital of Canada?
- source_sentence: Tell me about Shopify
sentences:
- Who discovered penicillin?
- Share info about Shopify
- Who invented the telephone?
- source_sentence: What is the melting point of ice at sea level?
sentences:
- What is the boiling point of water at sea level?
- Can you recommend a good restaurant nearby?
- Tell me a joke
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class dev
type: pair-class-dev
metrics:
- type: cosine_accuracy
value: 0.9300411522633745
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.788658857345581
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9237668161434978
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7819762825965881
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8956521739130435
name: Cosine Precision
- type: cosine_recall
value: 0.9537037037037037
name: Cosine Recall
- type: cosine_ap
value: 0.9603135110633257
name: Cosine Ap
- type: dot_accuracy
value: 0.9300411522633745
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.788658857345581
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.9237668161434978
name: Dot F1
- type: dot_f1_threshold
value: 0.7819762229919434
name: Dot F1 Threshold
- type: dot_precision
value: 0.8956521739130435
name: Dot Precision
- type: dot_recall
value: 0.9537037037037037
name: Dot Recall
- type: dot_ap
value: 0.9603135110633257
name: Dot Ap
- type: manhattan_accuracy
value: 0.9218106995884774
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 9.936657905578613
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.914798206278027
name: Manhattan F1
- type: manhattan_f1_threshold
value: 10.316186904907227
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8869565217391304
name: Manhattan Precision
- type: manhattan_recall
value: 0.9444444444444444
name: Manhattan Recall
- type: manhattan_ap
value: 0.9578931449470002
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.9300411522633745
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.6501401662826538
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.9237668161434978
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.6603381633758545
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8956521739130435
name: Euclidean Precision
- type: euclidean_recall
value: 0.9537037037037037
name: Euclidean Recall
- type: euclidean_ap
value: 0.9603135110633257
name: Euclidean Ap
- type: max_accuracy
value: 0.9300411522633745
name: Max Accuracy
- type: max_accuracy_threshold
value: 9.936657905578613
name: Max Accuracy Threshold
- type: max_f1
value: 0.9237668161434978
name: Max F1
- type: max_f1_threshold
value: 10.316186904907227
name: Max F1 Threshold
- type: max_precision
value: 0.8956521739130435
name: Max Precision
- type: max_recall
value: 0.9537037037037037
name: Max Recall
- type: max_ap
value: 0.9603135110633257
name: Max Ap
- task:
type: binary-classification
name: Binary Classification
dataset:
name: pair class test
type: pair-class-test
metrics:
- type: cosine_accuracy
value: 0.9300411522633745
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.788658857345581
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9237668161434978
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7819762825965881
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.8956521739130435
name: Cosine Precision
- type: cosine_recall
value: 0.9537037037037037
name: Cosine Recall
- type: cosine_ap
value: 0.9603135110633257
name: Cosine Ap
- type: dot_accuracy
value: 0.9300411522633745
name: Dot Accuracy
- type: dot_accuracy_threshold
value: 0.788658857345581
name: Dot Accuracy Threshold
- type: dot_f1
value: 0.9237668161434978
name: Dot F1
- type: dot_f1_threshold
value: 0.7819762229919434
name: Dot F1 Threshold
- type: dot_precision
value: 0.8956521739130435
name: Dot Precision
- type: dot_recall
value: 0.9537037037037037
name: Dot Recall
- type: dot_ap
value: 0.9603135110633257
name: Dot Ap
- type: manhattan_accuracy
value: 0.9218106995884774
name: Manhattan Accuracy
- type: manhattan_accuracy_threshold
value: 9.936657905578613
name: Manhattan Accuracy Threshold
- type: manhattan_f1
value: 0.914798206278027
name: Manhattan F1
- type: manhattan_f1_threshold
value: 10.316186904907227
name: Manhattan F1 Threshold
- type: manhattan_precision
value: 0.8869565217391304
name: Manhattan Precision
- type: manhattan_recall
value: 0.9444444444444444
name: Manhattan Recall
- type: manhattan_ap
value: 0.9578931449470002
name: Manhattan Ap
- type: euclidean_accuracy
value: 0.9300411522633745
name: Euclidean Accuracy
- type: euclidean_accuracy_threshold
value: 0.6501401662826538
name: Euclidean Accuracy Threshold
- type: euclidean_f1
value: 0.9237668161434978
name: Euclidean F1
- type: euclidean_f1_threshold
value: 0.6603381633758545
name: Euclidean F1 Threshold
- type: euclidean_precision
value: 0.8956521739130435
name: Euclidean Precision
- type: euclidean_recall
value: 0.9537037037037037
name: Euclidean Recall
- type: euclidean_ap
value: 0.9603135110633257
name: Euclidean Ap
- type: max_accuracy
value: 0.9300411522633745
name: Max Accuracy
- type: max_accuracy_threshold
value: 9.936657905578613
name: Max Accuracy Threshold
- type: max_f1
value: 0.9237668161434978
name: Max F1
- type: max_f1_threshold
value: 10.316186904907227
name: Max F1 Threshold
- type: max_precision
value: 0.8956521739130435
name: Max Precision
- type: max_recall
value: 0.9537037037037037
name: Max Recall
- type: max_ap
value: 0.9603135110633257
name: Max Ap
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("srikarvar/multilingual-e5-small-pairclass-4")
# Run inference
sentences = [
'What is the melting point of ice at sea level?',
'What is the boiling point of water at sea level?',
'Can you recommend a good restaurant nearby?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `pair-class-dev`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.93 |
| cosine_accuracy_threshold | 0.7887 |
| cosine_f1 | 0.9238 |
| cosine_f1_threshold | 0.782 |
| cosine_precision | 0.8957 |
| cosine_recall | 0.9537 |
| cosine_ap | 0.9603 |
| dot_accuracy | 0.93 |
| dot_accuracy_threshold | 0.7887 |
| dot_f1 | 0.9238 |
| dot_f1_threshold | 0.782 |
| dot_precision | 0.8957 |
| dot_recall | 0.9537 |
| dot_ap | 0.9603 |
| manhattan_accuracy | 0.9218 |
| manhattan_accuracy_threshold | 9.9367 |
| manhattan_f1 | 0.9148 |
| manhattan_f1_threshold | 10.3162 |
| manhattan_precision | 0.887 |
| manhattan_recall | 0.9444 |
| manhattan_ap | 0.9579 |
| euclidean_accuracy | 0.93 |
| euclidean_accuracy_threshold | 0.6501 |
| euclidean_f1 | 0.9238 |
| euclidean_f1_threshold | 0.6603 |
| euclidean_precision | 0.8957 |
| euclidean_recall | 0.9537 |
| euclidean_ap | 0.9603 |
| max_accuracy | 0.93 |
| max_accuracy_threshold | 9.9367 |
| max_f1 | 0.9238 |
| max_f1_threshold | 10.3162 |
| max_precision | 0.8957 |
| max_recall | 0.9537 |
| **max_ap** | **0.9603** |
#### Binary Classification
* Dataset: `pair-class-test`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:-----------------------------|:-----------|
| cosine_accuracy | 0.93 |
| cosine_accuracy_threshold | 0.7887 |
| cosine_f1 | 0.9238 |
| cosine_f1_threshold | 0.782 |
| cosine_precision | 0.8957 |
| cosine_recall | 0.9537 |
| cosine_ap | 0.9603 |
| dot_accuracy | 0.93 |
| dot_accuracy_threshold | 0.7887 |
| dot_f1 | 0.9238 |
| dot_f1_threshold | 0.782 |
| dot_precision | 0.8957 |
| dot_recall | 0.9537 |
| dot_ap | 0.9603 |
| manhattan_accuracy | 0.9218 |
| manhattan_accuracy_threshold | 9.9367 |
| manhattan_f1 | 0.9148 |
| manhattan_f1_threshold | 10.3162 |
| manhattan_precision | 0.887 |
| manhattan_recall | 0.9444 |
| manhattan_ap | 0.9579 |
| euclidean_accuracy | 0.93 |
| euclidean_accuracy_threshold | 0.6501 |
| euclidean_f1 | 0.9238 |
| euclidean_f1_threshold | 0.6603 |
| euclidean_precision | 0.8957 |
| euclidean_recall | 0.9537 |
| euclidean_ap | 0.9603 |
| max_accuracy | 0.93 |
| max_accuracy_threshold | 9.9367 |
| max_f1 | 0.9238 |
| max_f1_threshold | 10.3162 |
| max_precision | 0.8957 |
| max_recall | 0.9537 |
| **max_ap** | **0.9603** |
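The numbers above can be reproduced with the same evaluator class. The snippet below is a minimal sketch rather than the exact evaluation script: the two sentence pairs and their labels are illustrative placeholders drawn from the widget examples, not the real dev/test split.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("srikarvar/multilingual-e5-small-pairclass-4")

# Pairs of sentences with a binary label: 1 = same intent, 0 = different intent
evaluator = BinaryClassificationEvaluator(
    sentences1=["Steps to bake a pie", "What is the melting point of ice at sea level?"],
    sentences2=["How to bake a pie?", "What is the boiling point of water at sea level?"],
    labels=[1, 0],
    name="pair-class-demo",
)
print(evaluator(model))  # accuracy / F1 / AP style metrics, as reported in the tables above
```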
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 971 training samples
* Columns: <code>sentence2</code>, <code>sentence1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence2 | sentence1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.12 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.82 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>0: ~48.61%</li><li>1: ~51.39%</li></ul> |
* Samples:
| sentence2 | sentence1 | label |
|:----------------------------------------------------------|:--------------------------------------------------------|:---------------|
| <code>Total number of bones in an adult human body</code> | <code>How many bones are in the human body?</code> | <code>1</code> |
| <code>What is the largest river in North America?</code> | <code>What is the largest lake in North America?</code> | <code>0</code> |
| <code>What is the capital of Australia?</code> | <code>What is the capital of New Zealand?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Evaluation Dataset
#### Unnamed Dataset
* Size: 243 evaluation samples
* Columns: <code>sentence2</code>, <code>sentence1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence2 | sentence1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.09 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.55 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>0: ~55.56%</li><li>1: ~44.44%</li></ul> |
* Samples:
| sentence2 | sentence1 | label |
|:-------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>What are the various forms of renewable energy?</code> | <code>What are the different types of renewable energy?</code> | <code>1</code> |
| <code>Gravity discoverer</code> | <code>Who discovered gravity?</code> | <code>1</code> |
| <code>Can you help me write this report?</code> | <code>Can you help me understand this report?</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
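Both splits share the column layout described above (`sentence2`, `sentence1`, `label`) and are trained with `OnlineContrastiveLoss`, which computes the contrastive loss only over the hard positive and hard negative pairs in each batch. A minimal sketch of wiring this up (the base checkpoint below is an assumption; the card does not state which model was fine-tuned):

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import OnlineContrastiveLoss

# Assumed base model, for illustration only.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Columns mirror the dataset description: sentence2, sentence1 and a 0/1 label.
train_dataset = Dataset.from_dict({
    "sentence2": ["Total number of bones in an adult human body"],
    "sentence1": ["How many bones are in the human body?"],
    "label": [1],
})

loss = OnlineContrastiveLoss(model)
```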
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `learning_rate`: 3e-06
- `weight_decay`: 0.01
- `num_train_epochs`: 15
- `lr_scheduler_type`: reduce_lr_on_plateau
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `learning_rate`: 3e-06
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 15
- `max_steps`: -1
- `lr_scheduler_type`: reduce_lr_on_plateau
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
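A sketch of how the non-default hyperparameters above map onto training arguments (`output_dir` and `save_strategy` are assumptions; they are not listed in the card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",                    # assumed
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    learning_rate=3e-6,
    weight_decay=0.01,
    num_train_epochs=15,
    lr_scheduler_type="reduce_lr_on_plateau",
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    save_strategy="epoch",                   # assumed: must match eval_strategy for load_best_model_at_end
    optim="adamw_torch_fused",
)
```

These arguments can then be passed to a `SentenceTransformerTrainer` together with the datasets and loss sketched in the dataset section above.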
### Training Logs
| Epoch | Step | Training Loss | loss | pair-class-dev_max_ap | pair-class-test_max_ap |
|:-----------:|:-------:|:-------------:|:----------:|:---------------------:|:----------------------:|
| 0 | 0 | - | - | 0.6426 | - |
| 0.6452 | 10 | 4.7075 | - | - | - |
| 0.9677 | 15 | - | 3.1481 | 0.7843 | - |
| 1.2903 | 20 | 3.431 | - | - | - |
| 1.9355 | 30 | 3.4054 | - | - | - |
| 2.0 | 31 | - | 2.1820 | 0.8692 | - |
| 2.5806 | 40 | 2.2735 | - | - | - |
| 2.9677 | 46 | - | 1.8185 | 0.9078 | - |
| 3.2258 | 50 | 2.3159 | - | - | - |
| 3.8710 | 60 | 2.1466 | - | - | - |
| 4.0 | 62 | - | 1.5769 | 0.9252 | - |
| 4.5161 | 70 | 1.6873 | - | - | - |
| 4.9677 | 77 | - | 1.4342 | 0.9310 | - |
| 5.1613 | 80 | 1.5927 | - | - | - |
| 5.8065 | 90 | 1.4184 | - | - | - |
| 6.0 | 93 | - | 1.3544 | 0.9357 | - |
| 6.4516 | 100 | 1.333 | - | - | - |
| 6.9677 | 108 | - | 1.2630 | 0.9402 | - |
| 7.0968 | 110 | 1.089 | - | - | - |
| 7.7419 | 120 | 1.0947 | - | - | - |
| 8.0 | 124 | - | 1.2120 | 0.9444 | - |
| 8.3871 | 130 | 0.8118 | - | - | - |
| 8.9677 | 139 | - | 1.1641 | 0.9454 | - |
| 9.0323 | 140 | 1.0237 | - | - | - |
| 9.6774 | 150 | 0.8406 | - | - | - |
| 10.0 | 155 | - | 1.0481 | 0.9464 | - |
| 10.3226 | 160 | 0.7081 | - | - | - |
| 10.9677 | 170 | 0.7397 | 0.9324 | 0.9509 | - |
| 11.6129 | 180 | 0.5604 | - | - | - |
| 12.0 | 186 | - | 0.8386 | 0.9556 | - |
| 12.2581 | 190 | 0.5841 | - | - | - |
| 12.9032 | 200 | 0.5463 | - | - | - |
| 12.9677 | 201 | - | 0.7930 | 0.9577 | - |
| 13.5484 | 210 | 0.4599 | - | - | - |
| 14.0 | 217 | - | 0.7564 | 0.9599 | - |
| 14.1935 | 220 | 0.2437 | - | - | - |
| **14.5161** | **225** | **-** | **0.7522** | **0.9603** | **0.9603** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
BogdanTurbal/model_bert_large_cased_d_political_bias_gender_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12 | BogdanTurbal | 2024-08-20T10:07:32Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:BogdanTurbal/model_bert_large_cased_d_political_bias_ep_2_sqn_a_p_100_v_12",
"base_model:finetune:BogdanTurbal/model_bert_large_cased_d_political_bias_ep_2_sqn_a_p_100_v_12",
"license:apache-2.0",
"region:us"
] | null | 2024-08-20T09:06:58Z | ---
license: apache-2.0
base_model: BogdanTurbal/model_bert_large_cased_d_political_bias_ep_2_sqn_a_p_100_v_12
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_bert_large_cased_d_political_bias_gender_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_bert_large_cased_d_political_bias_gender_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12
This model is a fine-tuned version of [BogdanTurbal/model_bert_large_cased_d_political_bias_ep_2_sqn_a_p_100_v_12](https://huggingface.co/BogdanTurbal/model_bert_large_cased_d_political_bias_ep_2_sqn_a_p_100_v_12) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2461
- Accuracy: 0.7851
- F1 Micro: 0.7851
- Auc: 0.8575
## Model description
More information needed
## Intended uses & limitations
More information needed
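Pending fuller documentation, here is a minimal usage sketch (the checkpoint ships its own label mapping, which is not documented here, and the input sentence is only an illustration):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="BogdanTurbal/model_bert_large_cased_d_political_bias_gender_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12",
)
print(classifier("Example sentence to score."))
```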
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.5648 | 1.0 | 38 | 0.5449 | 0.7174 | 0.7174 | 0.8512 |
| 0.2551 | 2.0 | 76 | 0.5605 | 0.7784 | 0.7784 | 0.8616 |
| 0.3317 | 3.0 | 114 | 0.8203 | 0.7926 | 0.7926 | 0.8582 |
| 0.1221 | 4.0 | 152 | 1.2179 | 0.7843 | 0.7843 | 0.8356 |
| 0.0096 | 5.0 | 190 | 1.1558 | 0.7784 | 0.7784 | 0.8516 |
| 0.0013 | 6.0 | 228 | 1.2461 | 0.7851 | 0.7851 | 0.8575 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
BogdanTurbal/model_roberta_large_d_political_bias_political_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12 | BogdanTurbal | 2024-08-20T09:59:14Z | 5 | 0 | null | [
"tensorboard",
"safetensors",
"roberta",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"region:us"
] | null | 2024-08-20T09:05:09Z | ---
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_roberta_large_d_political_bias_political_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_roberta_large_d_political_bias_political_bias_ep_2_6_a_sqn_a_b_p_100_5_v_12
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6936
- Accuracy: 0.5026
- F1 Micro: 0.5026
- Auc: 0.4486
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.7239 | 1.0 | 37 | 0.7001 | 0.5026 | 0.5026 | 0.4269 |
| 0.7018 | 2.0 | 74 | 0.6965 | 0.5026 | 0.5026 | 0.4372 |
| 0.7098 | 3.0 | 111 | 0.6932 | 0.5026 | 0.5026 | 0.4599 |
| 0.7063 | 4.0 | 148 | 0.6931 | 0.5026 | 0.5026 | 0.4734 |
| 0.6831 | 5.0 | 185 | 0.6932 | 0.4974 | 0.4974 | 0.4479 |
| 0.6977 | 6.0 | 222 | 0.6936 | 0.5026 | 0.5026 | 0.4486 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/128-goldfish-loss-llama-1B-GGUF | mradermacher | 2024-08-20T09:59:09Z | 17 | 0 | transformers | [
"transformers",
"gguf",
"goldfish-loss",
"memorization",
"mitigation",
"en",
"dataset:tomg-group-umd/wikipedia-en-2k-samples",
"base_model:tomg-group-umd/128-goldfish-loss-llama-1B",
"base_model:quantized:tomg-group-umd/128-goldfish-loss-llama-1B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-20T09:45:57Z | ---
base_model: tomg-group-umd/128-goldfish-loss-llama-1B
datasets:
- tomg-group-umd/wikipedia-en-2k-samples
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- goldfish-loss
- memorization
- mitigation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tomg-group-umd/128-goldfish-loss-llama-1B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/128-goldfish-loss-llama-1B-GGUF/resolve/main/128-goldfish-loss-llama-1B.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
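As an example of trying one of these files locally (a rough sketch assuming `huggingface_hub` and `llama-cpp-python` are installed; swap the filename for any quant from the table above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo, then load it with the llama.cpp bindings.
gguf_path = hf_hub_download(
    repo_id="mradermacher/128-goldfish-loss-llama-1B-GGUF",
    filename="128-goldfish-loss-llama-1B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
print(llm("The quick brown fox", max_tokens=32)["choices"][0]["text"])
```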
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
KoichiYasuoka/roberta-large-korean-upos | KoichiYasuoka | 2024-08-20T09:57:49Z | 11 | 4 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"korean",
"pos",
"dependency-parsing",
"ko",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-large-korean-hanja",
"base_model:finetune:KoichiYasuoka/roberta-large-korean-hanja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-29T08:16:31Z | ---
language:
- "ko"
tags:
- "korean"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-large-korean-hanja
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."
- text: "็ด
ๆน ๋ง์ด ๋์ ็ด
ๆน๋ผ ็่ฆบํ๋ค."
---
# roberta-large-korean-upos
## Model Description
This is a RoBERTa model pre-trained on Korean texts for POS-tagging and dependency-parsing, derived from [roberta-large-korean-hanja](https://huggingface.co/KoichiYasuoka/roberta-large-korean-hanja). Every word (์ด์ ) is tagged by [UPOS](https://universaldependencies.org/u/pos/)(Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-korean-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-large-korean-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-large-korean-upos")
print(nlp("ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF | mradermacher | 2024-08-20T09:55:00Z | 316 | 0 | transformers | [
"transformers",
"gguf",
"axolotl",
"generated_from_trainer",
"en",
"dataset:Magpie-Align/Magpie-Llama-3.1-Pro-500K-Filtered",
"dataset:Magpie-Align/Magpie-Reasoning-150K",
"base_model:Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.2",
"base_model:quantized:Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.2",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-20T08:39:28Z | ---
base_model: Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.2
datasets:
- Magpie-Align/Magpie-Llama-3.1-Pro-500K-Filtered
- Magpie-Align/Magpie-Reasoning-150K
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- axolotl
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Magpie-Align/Llama-3.1-8B-Magpie-Align-SFT-v0.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Magpie-Align-SFT-v0.2-i1-GGUF/resolve/main/Llama-3.1-8B-Magpie-Align-SFT-v0.2.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KoichiYasuoka/roberta-base-korean-hanja | KoichiYasuoka | 2024-08-20T09:54:58Z | 107 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"korean",
"masked-lm",
"ko",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-01T03:29:39Z | ---
language:
- "ko"
tags:
- "korean"
- "masked-lm"
base_model: klue/roberta-base
license: "cc-by-sa-4.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
---
# roberta-base-korean-hanja
## Model Description
This is a RoBERTa model pre-trained on Korean texts, derived from [klue/roberta-base](https://huggingface.co/klue/roberta-base). Token-embeddings are enhanced to include all ํ๋ฌธ ๊ต์ก์ฉ ๊ธฐ์ด ํ์ and ์ธ๋ช์ฉ ํ์ characters. You can fine-tune `roberta-base-korean-hanja` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-korean-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-korean-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-korean-hanja")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-korean-hanja")
```
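A follow-up sketch for masked-token prediction with the same checkpoint (the example sentence is only an illustration):

```py
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="KoichiYasuoka/roberta-base-korean-hanja",
)
print(fill_mask("대한민국의 수도는 [MASK]이다."))
```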
|
KoichiYasuoka/roberta-base-korean-upos | KoichiYasuoka | 2024-08-20T09:53:32Z | 398 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"korean",
"pos",
"dependency-parsing",
"ko",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/roberta-base-korean-hanja",
"base_model:finetune:KoichiYasuoka/roberta-base-korean-hanja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-11-29T08:02:22Z | ---
language:
- "ko"
tags:
- "korean"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/roberta-base-korean-hanja
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "token-classification"
widget:
- text: "ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."
- text: "็ด
ๆน ๋ง์ด ๋์ ็ด
ๆน๋ผ ็่ฆบํ๋ค."
---
# roberta-base-korean-upos
## Model Description
This is a RoBERTa model pre-trained on Korean texts for POS-tagging and dependency-parsing, derived from [roberta-base-korean-hanja](https://huggingface.co/KoichiYasuoka/roberta-base-korean-hanja). Every word (์ด์ ) is tagged by [UPOS](https://universaldependencies.org/u/pos/)(Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification,TokenClassificationPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-korean-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/roberta-base-korean-upos")
pipeline=TokenClassificationPipeline(tokenizer=tokenizer,model=model,aggregation_strategy="simple")
nlp=lambda x:[(x[t["start"]:t["end"]],t["entity_group"]) for t in pipeline(x)]
print(nlp("ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/roberta-base-korean-upos")
print(nlp("ํ์ ๋ง์ด ๋์ ํ์๋ผ ์๊ฐํ๋ค."))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|