modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
mradermacher/bge_large_medical-GGUF | mradermacher | 2025-02-26T00:59:56Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-02-26T00:58:16Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ls-da3m0ns/bge_large_medical
|
debjit20504/miRNA-biobert | debjit20504 | 2025-02-26T00:58:23Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"biobert",
"miRNA",
"biomedical",
"LoRA",
"fine-tuning",
"dataset:custom-biomedical-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-17T16:08:57Z | ---
tags:
- text-classification
- transformers
- biobert
- miRNA
- biomedical
- LoRA
- fine-tuning
library_name: transformers
datasets:
- custom-biomedical-dataset
license: apache-2.0
---
# 🧬 miRNA-BioBERT: Fine-Tuned BioBERT for miRNA Sentence Classification
**Fine-tuned BioBERT model for classifying miRNA-related sentences in biomedical research papers.**
<!-- **Hugging Face Model Link**: [debjit20504/miRNA-biobert](https://huggingface.co/debjit20504/miRNA-biobert) -->
---
## Overview
**miRNA-BioBERT** is a fine-tuned version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1), trained specifically for **classifying sentences** as **miRNA-related (relevant) or not (irrelevant)**. The model is useful for **automating literature reviews**, **extracting relevant sentences**, and **identifying key insights** in genomic research.
✅ **Base Model**: `dmis-lab/biobert-base-cased-v1.1`
✅ **Fine-tuning Method**: **LoRA (Low-Rank Adaptation)**
✅ **Dataset**: **Curated biomedical text corpus containing labeled miRNA-relevant and non-relevant sentences**
✅ **Task**: **Binary classification (1 = functional, 0 = non-functional)**
✅ **Trained on**: **RTX A6000 GPU (5 epochs, batch size 32, learning rate 2e-5)**
## How to Use the Model
### 1. Install Dependencies
```bash
pip install transformers torch
```
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
# Load the model and tokenizer
model_name = "debjit20504/miRNA-biobert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
# Move model to GPU or MPS (for Mac)
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
def classify_text(text):
    inputs = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        output = model(**inputs)
    label = torch.argmax(output.logits, dim=1).item()
    return "Functional" if label == 1 else "Non-functional"
# Example Test
sample_text = "The results showed that miR-223-3p decreased in glioblastoma tissue but NLRP3 increased."
print(f"Classification: {classify_text(sample_text)}")
```
## Training Details
- Dataset: Biomedical text dataset with 429,785 relevant sentences and 87,966 irrelevant sentences.
- Fine-Tuning Method: LoRA (Low-Rank Adaptation) for efficient training.
- Training Hardware: NVIDIA RTX A6000 GPU.
- Training Settings:
- Batch size: 32
- Learning rate: 2e-5
- Optimizer: AdamW
- Warmup steps: 1000
- Epochs: 5
- Mixed precision (fp16): ✅ enabled for efficiency.
---
## Model Applications
✅ **Biomedical NLP** – Extracting meaningful information from biomedical literature.
✅ **miRNA Research** – Identifying sentences discussing miRNA mechanisms.
✅ **Automated Literature Review** – Filtering relevant studies efficiently.
✅ **Genomics & Bioinformatics** – Enhancing data retrieval from scientific texts.
---
## 💬 Contact
For any questions or collaborations, reach out via:
**📧 Email**: [email protected]
**🔗 LinkedIn**: https://www.linkedin.com/in/debjit-pramanik-88a837171/ |
1-Girl-15-Haands/wATCH.1.Girl.15.Hands.viral.video.original | 1-Girl-15-Haands | 2025-02-26T00:56:13Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T00:55:17Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/)
[🔴 ➤►Click Here to (Watch Full Video)](https://lekedvideo.xyz/watch/)
[🔴 ➤►Click Here to (Full Video Link)](https://lekedvideo.xyz/watch/) |
1-Girl-15-Haands/FULL.1-Girl-15-Hands.Video.Viral.Video.On.Social.Media.X | 1-Girl-15-Haands | 2025-02-26T00:56:11Z | 0 | 0 | null | [
"region:us"
] | null | 2025-02-26T00:54:43Z | [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://lekedvideo.xyz/watch/)
[🔴 ➤►Click Here to (Watch Full Video)](https://lekedvideo.xyz/watch/)
[🔴 ➤►Click Here to (Full Video Link)](https://lekedvideo.xyz/watch/) |
mradermacher/medical_transcription_generator-GGUF | mradermacher | 2025-02-26T00:56:04Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"medical",
"en",
"base_model:alibidaran/medical_transcription_generator",
"base_model:quantized:alibidaran/medical_transcription_generator",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T00:53:56Z | ---
base_model: alibidaran/medical_transcription_generator
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/alibidaran/medical_transcription_generator
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/medical_transcription_generator-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
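If you would rather load a quant from code, one option (an illustrative sketch, not part of the original card) is `llama-cpp-python`; it is assumed here that your installed version provides `Llama.from_pretrained` for downloading directly from the Hub:

```python
# Sketch: load one of the provided quants with llama-cpp-python
# (pip install llama-cpp-python). The Q4_K_S file is chosen because the
# table below marks it "fast, recommended".
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/medical_transcription_generator-GGUF",
    filename="medical_transcription_generator.Q4_K_S.gguf",
)
print(llm("PREOPERATIVE DIAGNOSIS:", max_tokens=64)["choices"][0]["text"])
```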
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/medical_summarization-GGUF | mradermacher | 2025-02-26T00:53:09Z | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T00:52:36Z | <!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Falconsai/medical_summarization
|
TaoZewen/rl_course_vizdoom_health_gathering_supreme_V2 | TaoZewen | 2025-02-26T00:52:02Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-26T00:51:58Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.48 +/- 4.14
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r TaoZewen/rl_course_vizdoom_health_gathering_supreme_V2
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_V2
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_V2 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
mradermacher/Nostr-Llama-3.1-8B-GGUF | mradermacher | 2025-02-26T00:49:58Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:some1nostr/Nostr-Llama-3.1-8B",
"base_model:quantized:some1nostr/Nostr-Llama-3.1-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T23:58:38Z | ---
base_model: some1nostr/Nostr-Llama-3.1-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/some1nostr/Nostr-Llama-3.1-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Roybello/Roy-replicate | Roybello | 2025-02-26T00:48:37Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T18:56:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ROY
---
# Roy Replicate
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ROY` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Roybello/Roy-replicate', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
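Continuing the snippet above, the trigger word should appear in the prompt itself (an illustrative prompt, since the card's snippet uses a placeholder):

```python
# Continues the block above: include the trigger word ROY in the prompt
# (hypothetical example prompt).
image = pipeline('a portrait photo of ROY in a studio').images[0]
image.save('roy.png')
```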
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Paladiso/d72ef992-2955-4289-aef8-fcc6be507dfb | Paladiso | 2025-02-26T00:48:14Z | 0 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-7b-it",
"base_model:adapter:unsloth/gemma-7b-it",
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T00:42:43Z | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d72ef992-2955-4289-aef8-fcc6be507dfb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2e4b4f09c9ae8b90_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2e4b4f09c9ae8b90_train_data.json
type:
field_input: content
field_instruction: instruction
field_output: new_contents
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Paladiso/d72ef992-2955-4289-aef8-fcc6be507dfb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2e4b4f09c9ae8b90_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a3c5ad4e-0086-4c2f-b5d5-c05271f38d4e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a3c5ad4e-0086-4c2f-b5d5-c05271f38d4e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d72ef992-2955-4289-aef8-fcc6be507dfb
This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3649 | 0.0004 | 1 | 0.3410 |
| 0.5629 | 0.0011 | 3 | 0.3297 |
| 0.1874 | 0.0023 | 6 | 0.2529 |
| 0.1635 | 0.0034 | 9 | 0.1654 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Kei5uke/llama3 | Kei5uke | 2025-02-26T00:47:36Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T00:37:42Z | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kei5uke
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF | mradermacher | 2025-02-26T00:46:10Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"arxiv:2502.02384",
"en",
"base_model:thu-ml/STAIR-Llama-3.1-8B-SFT",
"base_model:quantized:thu-ml/STAIR-Llama-3.1-8B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T22:59:19Z | ---
base_model: thu-ml/STAIR-Llama-3.1-8B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
- arxiv:2502.02384
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/thu-ml/STAIR-Llama-3.1-8B-SFT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
erax-ai/EraX-WoW-Turbo-VI-b256-lr5e-5-wd0.08-gradnorm0.8-cp8400 | erax-ai | 2025-02-26T00:45:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-02-26T00:42:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF | mradermacher | 2025-02-26T00:39:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"arxiv:2502.02384",
"en",
"base_model:thu-ml/STAIR-Llama-3.1-8B-SFT",
"base_model:quantized:thu-ml/STAIR-Llama-3.1-8B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T19:33:31Z | ---
base_model: thu-ml/STAIR-Llama-3.1-8B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
- arxiv:2502.02384
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/thu-ml/STAIR-Llama-3.1-8B-SFT
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mrdamha/Rationalist_in_Islam_001 | mrdamha | 2025-02-26T00:39:06Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-02-25T07:37:05Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
EVX-Tech/EVXSigmaChatBot | EVX-Tech | 2025-02-26T00:35:49Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-02-16T16:28:52Z | ---
license: mit
---
## This bot uses Dialogflow; to use it, import it into Google Dialogflow
|
gazimagomed/GazGPT | gazimagomed | 2025-02-26T00:34:06Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-02-26T00:34:06Z | ---
license: apache-2.0
---
|
qing-yao/long_first_headfinal_seed-42_1e-3 | qing-yao | 2025-02-26T00:33:55Z | 1 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-21T21:22:57Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: long_first_headfinal_seed-42_1e-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long_first_headfinal_seed-42_1e-3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1160
- Accuracy: 0.2007
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.1724 | 0.9994 | 1470 | 5.5103 | 0.1759 |
| 4.5289 | 1.9992 | 2940 | 5.4000 | 0.1844 |
| 3.8901 | 2.9991 | 4410 | 5.3044 | 0.1895 |
| 3.7154 | 3.9996 | 5881 | 5.2299 | 0.1952 |
| 3.4885 | 4.9994 | 7351 | 5.1806 | 0.1983 |
| 3.4097 | 5.9992 | 8821 | 5.1625 | 0.1984 |
| 3.3049 | 6.9991 | 10291 | 5.1184 | 0.1994 |
| 3.2579 | 7.9996 | 11762 | 5.1354 | 0.2021 |
| 3.2058 | 8.9994 | 13232 | 5.1414 | 0.2010 |
| 3.1678 | 9.9992 | 14702 | 5.1105 | 0.2010 |
| 3.143 | 10.9991 | 16172 | 5.0866 | 0.1999 |
| 3.1069 | 11.9996 | 17643 | 5.1130 | 0.2012 |
| 3.1019 | 12.9994 | 19113 | 5.1315 | 0.2012 |
| 3.0681 | 13.9992 | 20583 | 5.1160 | 0.2007 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.0
|
haohsuan/N8N | haohsuan | 2025-02-26T00:33:21Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2025-02-26T00:32:41Z | ---
license: mit
---
```bash
pip install vllm
vllm serve "deepseek-ai/DeepSeek-R1"
```
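Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default), so it can be queried with the `openai` client. A minimal sketch, assuming default server settings:

```python
# Sketch: query the vLLM OpenAI-compatible endpoint started above.
from openai import OpenAI

# Any key string works unless the server was started with --api-key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```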
|
iaminju/DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k | iaminju | 2025-02-26T00:28:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T23:41:11Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="iaminju/DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/minjuseo/huggingface/runs/7gqdvv36)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
thellumi/LLuMi_Think_70B | thellumi | 2025-02-26T00:28:13Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"deepseek",
"meta",
"qwen",
"en",
"tr",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-23T21:50:09Z | ---
license: mit
language:
- en
- tr
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
- llama
- deepseek
- meta
- qwen
---
<p align="center">
<a href="https://thelucy.tech"><b>Powered by the Lucy</b></a>
</p>
## Model Information
The LLuMi multilingual large language model (LLM) is an instruction-tuned generative model with 70B parameters (text in/text out), built on the robust Llama 3.3 foundation. LLuMi incorporates additional refinements and distillation techniques inspired by the DeepSeek-R1 framework, resulting in a model that not only retains the original strengths of Llama 3.3 but also delivers improved performance and efficiency for real-world applications. LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.
<p align="center">
<a href="[email protected]"><a>[email protected]</a></a>
</p>
**Model Release Date:**
* **LLuMi Think LLM Family: February 24, 2025**
## 1. Introduction
We introduce LLuMi, a state-of-the-art multilingual large language model (LLM) built on the robust Llama 3.3 70B architecture. LLuMi is instruction tuned to excel in real-world applications, particularly in multilingual dialogue and complex reasoning tasks.
Leveraging advanced refinements and distillation techniques inspired by the DeepSeek-R1 framework, LLuMi not only retains the core strengths of its Llama 3.3 foundation but also delivers enhanced performance and efficiency. By integrating large-scale reinforcement learning directly on the base model, LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.
To support the research community and foster further innovation, we are releasing the full LLuMi model along with a range of distilled checkpoints across various sizes. This initiative empowers researchers to deploy both the complete model and resource-efficient distilled versions for diverse applications.
NOTE: Before deploying LLuMi locally, please review the How to use & Usage Recommendations section for detailed guidelines and best practices.
**Distillation: Unlocking the Power of Smaller Models**
- We demonstrate that the advanced reasoning patterns of larger models can be distilled into smaller, more efficient models. This approach yields improved performance compared to the reasoning strategies derived solely via reinforcement learning on smaller models. The open-source DeepSeek-R1 framework, and its API, play a crucial role in enabling the research community to distill and develop even more powerful smaller models in the future.
- Leveraging the rich reasoning data generated by DeepSeek-R1, we fine-tuned LLuMi, a dense, instruction-tuned model built upon the Llama 3.3 70B architecture. Our evaluation results show that the distilled LLuMi model performs exceptionally well on various benchmarks, often matching or even surpassing the performance of larger models.
- Furthermore, we are excited to open-source the full LLuMi model along with a series of distilled checkpoints across multiple sizes (3B, 8B, and 70B) based on the LLuMi framework. This initiative provides the research community with access to both the complete model and its distilled versions, enabling a wide range of applications with varying computational needs.
**Post-Training: Large-Scaling Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base LLuMi model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach enables LLuMi to explore advanced chain-of-thought (CoT) capabilities for tackling complex problems, leading to enhanced self-verification, reflective reasoning, and the generation of extended CoTs. Notably, LLuMi is among the first open research initiatives to demonstrate that the reasoning capabilities of large language models can be effectively incentivized purely through RL, without the need for an initial SFT phase. This breakthrough paves the way for future advancements in scalable reinforcement learning strategies for LLMs.
We introduce our comprehensive pipeline for developing LLuMi, inspired by DeepSeek-R1, which includes:
- Two RL Stages: Designed to discover improved reasoning patterns and align the model with human preferences.
- Two SFT Stages: Serving as the foundational seed for both the model's reasoning and non-reasoning capabilities.
We believe this innovative pipeline will not only enhance LLuMi's performance but also benefit the industry by inspiring the creation of more robust and efficient models.
## 2. Model Distillation and GRPO-Based Thinking Enhancement
The LLuMi 70B model has been meticulously developed using the advanced techniques of DeepSeek-R1 Distill Llama 3.3 70B. By leveraging state-of-the-art distillation methods, LLuMi 70B not only retains the powerful multilingual and instruction-tuned capabilities of its foundation but also achieves enhanced performance and efficiency for diverse real-world applications.
Furthermore, inspired by the successes of DeepSeek-R1, we have infused our smaller LLuMi 8B and 3B models with a unique thinking property through the use of GRPO (Group Relative Policy Optimization). This innovative approach endows the smaller models with sophisticated chain-of-thought reasoning and reflective problem-solving abilities, ensuring that even with fewer parameters, they can deliver agile and context-aware responses.
Together, these advancements underscore our commitment to creating a versatile family of models that scale seamlessly from 3B to 70B, providing powerful solutions tailored to various computational and application needs.
## 3. Model Downloads
### LLuMi Think Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| LLuMi Think 3B | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [🤗 HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_3B) |
| LLuMi Think 8B | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [🤗 HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_8B) |
| LLuMi Think 70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_70B) |
</div>
## 4. How to use
This repository contains one of three versions of the LLuMi Think LLM models, for use with `transformers` and with the `bitsandbytes` codebase.
- **Use with transformers**
Starting with `transformers >= 4.48.3`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "thellumi/LLuMi_Think_70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Why are tomatoes red?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
- **Use `bitsandbytes`**
The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`
See the snippet below for usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "thellumi/LLuMi_Think_70B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "Why are tomatoes red?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
To load in 4-bit simply pass `load_in_4bit=True`
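For example, a minimal sketch of the 4-bit variant of the snippet above:

```python
# Sketch: same loading path as above, but with 4-bit quantization.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model_4bit = AutoModelForCausalLM.from_pretrained(
    "thellumi/LLuMi_Think_70B",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)
```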
## 5. Usage Recommendations
**We recommend adhering to the following configurations when utilizing the LLuMi Think series models (which follow the DeepSeek-R1 recommendations), including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, DeepSeek has observed that the R1-series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
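As an illustration, here is a sketch (not part of the official examples) that applies the recommendations above with `transformers`: no system prompt, temperature 0.6, and a response forced to begin with "\<think\>\n" by appending it to the rendered chat prompt:

```python
# Sketch: enforce the thinking pattern by appending "<think>\n" to the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thellumi/LLuMi_Think_70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# All instructions go in the user prompt; no system prompt (recommendation 2).
messages = [{"role": "user", "content": "Please reason step by step: what is 17 * 24?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"  # force the model to start its response by thinking

# The chat template already adds special tokens, so skip them here.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```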
## 6. Training Data
**Overview:**
LLuMi is built upon the robust Llama 3.3 architecture, which was pretrained on approximately 15 trillion tokens sourced from publicly available datasets. For fine-tuning, LLuMi leverages a combination of publicly available instruction datasets and over 10 million examples sourced from Hugging Face. This comprehensive training corpus has been curated to ensure high performance across various languages, with dedicated support for Turkish and other languages.
**Data Freshness:**
The pretraining data includes content up to a cutoff date of Aug. 2024, ensuring that LLuMi is aligned with recent language trends and developments.
## 7. Benchmarks
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| OpenAI o1-1217 | 79.2 | - | 96.4 | 75.7 | 63.4 | 2061 |
| OpenAI o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| OpenAI GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek R1 | 79.8 | - | 97.3 | 71.5 | 65.9 | 2209 |
| LLuMi Think 70B | 69.3 | 86.4 | 94.1 | 64.8 | 56.9 | 1603 |
**Note on Benchmark Results:** Due to hardware limitations, full-scale benchmark tests could not be performed, and the results may vary. We remain fully transparent about these constraints and are actively working towards securing the necessary resources to conduct comprehensive evaluations in the near future.
## 8. Responsibility & Safety
At LLuMi, we are committed to promoting responsible and ethical use of our technology. We recognize that large language models carry inherent risks and potential for misuse, and we have taken several measures to mitigate these challenges:
- **Bias Mitigation:** We have implemented various strategies during training to minimize biases in model outputs. However, users should be aware that, despite these efforts, occasional biases or unintended outputs may still occur.
- **Usage Guidelines:** LLuMi is designed for research and responsible deployment. We strongly encourage users to adhere to ethical guidelines, applicable laws, and best practices when using the model. Generating harmful, misleading, or offensive content is strictly prohibited.
- **Safety Measures:** Users deploying LLuMi in real-world applications should implement additional safety filters and monitoring mechanisms. We recommend regular audits and evaluations to ensure that the model's outputs remain within acceptable ethical boundaries.
- **Community Engagement:** We invite the community to provide feedback on any safety or ethical issues encountered during usage. This collaborative approach is vital for continuously refining the model and addressing potential risks.
- **Transparency and Accountability:** By open-sourcing LLuMi, we aim to foster transparency and accountability. We commit to ongoing research and updates focused on improving the model's safety and ethical performance.
By using LLuMi, you agree to follow these guidelines and contribute to a safer, more responsible AI ecosystem.
## 9. License
This code repository and the model weights are licensed under the [MIT License](https://choosealicense.com/licenses/mit/).
The LLuMi Think series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- LLuMi Think 3B is derived from [Qwen-2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), which is originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE).
- LLuMi Think 8B is derived from [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- LLuMi Think 70B is derived from [Llama3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 10. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
```
@misc{thellumi,
author = {The Lucy},
month = feb,
title = {{LLuMi Think}},
howpublished = {https://llumi.tech},
year = {2025}
}
```
## 11. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
thellumi/LLuMi_Think_8B | thellumi | 2025-02-26T00:27:50Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"deepseek",
"meta",
"qwen",
"en",
"tr",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-24T21:33:25Z | ---
license: mit
language:
- en
- tr
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
- llama
- deepseek
- meta
- qwen
---
<p align="center">
<a href="https://thelucy.tech"><b>Powered by the Lucy</b></a>
</p>
## Model Information
The LLuMi multilingual large language model (LLM) is an instruction-tuned generative model with 70B parameters (text in/text out). LLuMi builds upon this robust foundation by incorporating additional refinements and distillation techniques inspired by the DeepSeek-R1 framework. This results in a model that not only retains the original strengths of Llama 3.3 but also delivers improved performance and efficiency for real-world applications. LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.
<p align="center">
<a href="mailto:[email protected]">[email protected]</a>
</p>
**Model Release Date:**
* **LLuMi Think LLM Family: February 24, 2025**
## 1. Introduction
We introduce LLuMi, a state-of-the-art multilingual large language model (LLM) built on the robust Llama 3.3 70B architecture. LLuMi is instruction tuned to excel in real-world applications, particularly in multilingual dialogue and complex reasoning tasks.
Leveraging advanced refinements and distillation techniques inspired by the DeepSeek-R1 framework, LLuMi not only retains the core strengths of its Llama 3.3 foundation but also delivers enhanced performance and efficiency. By integrating large-scale reinforcement learning directly on the base model, LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.
To support the research community and foster further innovation, we are releasing the full LLuMi model along with a range of distilled checkpoints across various sizes. This initiative empowers researchers to deploy both the complete model and resource-efficient distilled versions for diverse applications.
NOTE: Before deploying LLuMi locally, please review the How to use & Usage Recommendations section for detailed guidelines and best practices.
**Distillation: Unlocking the Power of Smaller Models**
- We demonstrate that the advanced reasoning patterns of larger models can be distilled into smaller, more efficient models. This approach yields improved performance compared to the reasoning strategies derived solely via reinforcement learning on smaller models. The open source DeepSeek-R1 framework and its API play a crucial role in enabling the research community to distill and develop even more powerful smaller models in the future.
- Leveraging the rich reasoning data generated by DeepSeek-R1, we fine-tuned LLuMi, a dense, instruction-tuned model built upon the Llama 3.3 70B architecture. Our evaluation results show that the distilled LLuMi model performs exceptionally well on various benchmarks, often matching or even surpassing the performance of larger models.
- Furthermore, we are excited to open-source the full LLuMi model along with a series of distilled checkpoints across multiple sizes (including 3B, 8B, and 70B) based on the LLuMi framework. This initiative provides the research community with access to both the complete model and its distilled versions, enabling a wide range of applications with varying computational needs.
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base LLuMi model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach enables LLuMi to explore advanced chain-of-thought (CoT) capabilities for tackling complex problems, leading to enhanced self-verification, reflective reasoning, and the generation of extended CoTs. Notably, LLuMi is among the first open research initiatives to demonstrate that the reasoning capabilities of large language models can be effectively incentivized purely through RL, without the need for an initial SFT phase. This breakthrough paves the way for future advancements in scalable reinforcement learning strategies for LLMs.
We introduce our comprehensive pipeline for developing LLuMi, inspired by DeepSeek-R1, which includes:
- Two RL Stages: Designed to discover improved reasoning patterns and align the model with human preferences.
- Two SFT Stages: Serving as the foundational seed for both the model's reasoning and non-reasoning capabilities.
We believe this innovative pipeline will not only enhance LLuMi's performance but also benefit the industry by inspiring the creation of more robust and efficient models.
## 2. Model Distillation and GRPO-Based Thinking Enhancement
The LLuMi 70B model has been meticulously developed using the advanced techniques of DeepSeek-R1 Distill Llama 3.3 70B. By leveraging state-of-the-art distillation methods, LLuMi 70B not only retains the powerful multilingual and instruction-tuned capabilities of its foundation but also achieves enhanced performance and efficiency for diverse real-world applications.
Furthermore, inspired by the successes of DeepSeek-R1, we have infused our smaller LLuMi 8B and 3B models with a unique thinking property through the use of GRPO (Group Relative Policy Optimization). This innovative approach endows the smaller models with sophisticated chain-of-thought reasoning and reflective problem-solving abilities, ensuring that even with fewer parameters, they can deliver agile and context-aware responses.
Together, these advancements underscore our commitment to creating a versatile family of models that scale seamlessly from 3B to 70B, providing powerful solutions tailored to various computational and application needs.
## 3. Model Downloads
### LLuMi Think Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| LLuMi Think 3B | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [๐ค HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_3B) |
| LLuMi Think 8B | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [๐ค HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_8B) |
| LLuMi Think 70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [๐ค HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_70B) |
</div>
## 4. How to use
This repository contains three versions of the LLuMi Think LLM models, for use with the transformers and bitsandbytes codebases.
- **Use with transformers**
Starting with `transformers >= 4.48.3`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "thellumi/LLuMi_Think_70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Why are tomatoes red?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
- **Use `bitsandbytes`**
The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`.
See the snippet below for usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "thellumi/LLuMi_Think_70B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "Why are tomatoes red?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
To load in 4-bit, simply pass `load_in_4bit=True`.
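For example, a minimal 4-bit variant of the snippet above might look like the following sketch (the `nf4` quant type and `bfloat16` compute dtype are common choices we assume here, not settings prescribed by this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "thellumi/LLuMi_Think_70B"

# 4-bit NF4 quantization with bfloat16 compute; roughly halves memory vs. 8-bit.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```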
## 5. Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, DeepSeek has observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
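A minimal sketch of one way to enforce this with `transformers` (an illustration, not an API of this repo: it simply appends the tag to the templated prompt before generation):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thellumi/LLuMi_Think_70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Why are tomatoes red?"}]

# Render the chat template as text, then force the reply to start with "<think>\n".
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True)
prompt += "<think>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```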
## 6. Training Data
**Overview:**
LLuMi is built upon the robust Llama 3.3 architecture, which was pretrained on approximately 15 trillion tokens sourced from publicly available datasets. For fine-tuning, LLuMi leverages a combination of publicly available instruction datasets and over 10 million examples sourced from Hugging Face. This comprehensive training corpus has been curated to ensure high performance across various languages, with dedicated support for Turkish and other languages.
**Data Freshness:**
The pretraining data includes content up to a cutoff date of Aug. 2024, ensuring that LLuMi is aligned with recent language trends and developments.
## 7. Benchmarks
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| OpenAI o1-1217 | 79.2 | - | 96.4 | 75.7 | 63.4 | 2061 |
| OpenAI o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| OpenAI GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek R1 | 79.8 | - | 97.3 | 71.5 | 65.9 | 2209 |
| LLuMi Think 70B | 69.3 | 86.4 | 94.1 | 64.8 | 56.9 | 1603 |
**Note on Benchmark Results:** Due to hardware limitations, full-scale benchmark tests could not be performed, and the results may vary. We remain fully transparent about these constraints and are actively working towards securing the necessary resources to conduct comprehensive evaluations in the near future.
## 8. Responsibility & Safety
At LLuMi, we are committed to promoting responsible and ethical use of our technology. We recognize that large language models carry inherent risks and potential for misuse, and we have taken several measures to mitigate these challenges:
- **Bias Mitigation:** We have implemented various strategies during training to minimize biases in model outputs. However, users should be aware that, despite these efforts, occasional biases or unintended outputs may still occur.
- **Usage Guidelines:** LLuMi is designed for research and responsible deployment. We strongly encourage users to adhere to ethical guidelines, applicable laws, and best practices when using the model. Generating harmful, misleading, or offensive content is strictly prohibited.
- **Safety Measures:** Users deploying LLuMi in real-world applications should implement additional safety filters and monitoring mechanisms. We recommend regular audits and evaluations to ensure that the model's outputs remain within acceptable ethical boundaries.
- **Community Engagement:** We invite the community to provide feedback on any safety or ethical issues encountered during usage. This collaborative approach is vital for continuously refining the model and addressing potential risks.
- **Transparency and Accountability:** By open-sourcing LLuMi, we aim to foster transparency and accountability. We commit to ongoing research and updates focused on improving the model's safety and ethical performance.
By using LLuMi, you agree to follow these guidelines and contribute to a safer, more responsible AI ecosystem.
## 9. License
This code repository and the model weights are licensed under the [MIT License](https://choosealicense.com/licenses/mit/).
The LLuMi Think series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- LLuMi Think 3B is derived from [Qwen-2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE).
- LLuMi Think 8B is derived from [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- LLuMi Think 70B is derived from [Llama3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 10. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
```
@misc{thellumi,
author = {The Lucy},
month = feb,
title = {{LLuMi Think}},
howpublished = {https://llumi.tech},
year = {2025}
}
```
## 11. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
nomnoos37/250216-Mistral-Nemo-ggls-v1.3.6-0.5-1-epoch | nomnoos37 | 2025-02-26T00:27:31Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T00:02:30Z | ---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nomnoos37
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thellumi/LLuMi_Think_3B | thellumi | 2025-02-26T00:27:13Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"llama",
"deepseek",
"meta",
"qwen",
"en",
"tr",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-24T19:59:05Z | ---
license: mit
language:
- en
- tr
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
- llama
- deepseek
- meta
- qwen
---
<p align="center">
<a href="https://thelucy.tech"><b>Powered by the Lucy</b></a>
</p>
## Model Information
The LLuMi multilingual large language model (LLM) is an instruction-tuned generative model with 70B parameters (text in/text out). LLuMi builds upon this robust foundation by incorporating additional refinements and distillation techniques inspired by the DeepSeek-R1 framework. This results in a model that not only retains the original strengths of Llama 3.3 but also delivers improved performance and efficiency for real-world applications. LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.
<p align="center">
<a href="mailto:[email protected]">[email protected]</a>
</p>
**Model Release Date:**
* **LLuMi Think LLM Family: February 24, 2025**
## 1. Introduction
We introduce LLuMi, a state-of-the-art multilingual large language model (LLM) built on the robust Llama 3.3 70B architecture. LLuMi is instruction tuned to excel in real-world applications, particularly in multilingual dialogue and complex reasoning tasks.
Leveraging advanced refinements and distillation techniques inspired by the DeepSeek-R1 framework, LLuMi not only retains the core strengths of its Llama 3.3 foundation but also delivers enhanced performance and efficiency. By integrating large-scale reinforcement learning directly on the base model, LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.
To support the research community and foster further innovation, we are releasing the full LLuMi model along with a range of distilled checkpoints across various sizes. This initiative empowers researchers to deploy both the complete model and resource-efficient distilled versions for diverse applications.
NOTE: Before deploying LLuMi locally, please review the How to use & Usage Recommendations section for detailed guidelines and best practices.
**Distillation: Unlocking the Power of Smaller Models**
- We demonstrate that the advanced reasoning patterns of larger models can be distilled into smaller, more efficient models. This approach yields improved performance compared to the reasoning strategies derived solely via reinforcement learning on smaller models. The open source DeepSeek-R1 framework and its API play a crucial role in enabling the research community to distill and develop even more powerful smaller models in the future.
- Leveraging the rich reasoning data generated by DeepSeek-R1, we fine-tuned LLuMi, a dense, instruction-tuned model built upon the Llama 3.3 70B architecture. Our evaluation results show that the distilled LLuMi model performs exceptionally well on various benchmarks, often matching or even surpassing the performance of larger models.
- Furthermore, we are excited to open-source the full LLuMi model along with a series of distilled checkpoints across multiple sizes (including 3B, 8B, and 70B) based on the LLuMi framework. This initiative provides the research community with access to both the complete model and its distilled versions, enabling a wide range of applications with varying computational needs.
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base LLuMi model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach enables LLuMi to explore advanced chain-of-thought (CoT) capabilities for tackling complex problems, leading to enhanced self-verification, reflective reasoning, and the generation of extended CoTs. Notably, LLuMi is among the first open research initiatives to demonstrate that the reasoning capabilities of large language models can be effectively incentivized purely through RL, without the need for an initial SFT phase. This breakthrough paves the way for future advancements in scalable reinforcement learning strategies for LLMs.
We introduce our comprehensive pipeline for developing LLuMi, inspired by DeepSeek-R1, which includes:
- Two RL Stages: Designed to discover improved reasoning patterns and align the model with human preferences.
- Two SFT Stages: Serving as the foundational seed for both the model's reasoning and non-reasoning capabilities.
We believe this innovative pipeline will not only enhance LLuMi's performance but also benefit the industry by inspiring the creation of more robust and efficient models.
## 2. Model Distillation and GRPO-Based Thinking Enhancement
The LLuMi 70B model has been meticulously developed using the advanced techniques of DeepSeek-R1 Distill Llama 3.3 70B. By leveraging state-of-the-art distillation methods, LLuMi 70B not only retains the powerful multilingual and instruction-tuned capabilities of its foundation but also achieves enhanced performance and efficiency for diverse real-world applications.
Furthermore, inspired by the successes of DeepSeek-R1, we have infused our smaller LLuMi 8B and 3B models with a unique thinking property through the use of GRPO (Group Relative Policy Optimization). This innovative approach endows the smaller models with sophisticated chain-of-thought reasoning and reflective problem-solving abilities, ensuring that even with fewer parameters, they can deliver agile and context-aware responses.
Together, these advancements underscore our commitment to creating a versatile family of models that scale seamlessly from 3B to 70B, providing powerful solutions tailored to various computational and application needs.
## 3. Model Downloads
### LLuMi Think Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| LLuMi Think 3B | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [๐ค HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_3B) |
| LLuMi Think 8B | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [๐ค HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_8B) |
| LLuMi Think 70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [๐ค HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_70B) |
</div>
## 4. How to use
This repository contains three versions of the LLuMi Think LLM models, for use with the transformers and bitsandbytes codebases.
- **Use with transformers**
Starting with `transformers >= 4.48.3`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "thellumi/LLuMi_Think_70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "user", "content": "Why are tomatoes red?"},
]
outputs = pipeline(
messages,
max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
- **Use `bitsandbytes`**
The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers`.
See the snippet below for usage:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
model_id = "thellumi/LLuMi_Think_70B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
quantized_model = AutoModelForCausalLM.from_pretrained(
model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "Why are tomatoes red?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
To load in 4-bit, simply pass `load_in_4bit=True`.
## 5. Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, DeepSeek has observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
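As an illustrative sketch of recommendations 1 and 2 above, generation might be configured as follows (the `top_p` value is our assumption; it is not specified by this card):
```python
import transformers
import torch

model_id = "thellumi/LLuMi_Think_70B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Recommendation 2: no system prompt; all instructions go in the user turn.
messages = [
    {"role": "user", "content": "Please reason step by step, and put your final answer within \\boxed{}. What is 17 * 23?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,  # recommendation 1: keep temperature in the 0.5-0.7 range
    top_p=0.95,       # assumption: a common companion setting, not from this card
)
print(outputs[0]["generated_text"][-1])
```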
## 6. Training Data
**Overview:**
LLuMi is built upon the robust Llama 3.3 architecture, which was pretrained on approximately 15 trillion tokens sourced from publicly available datasets. For fine-tuning, LLuMi leverages a combination of publicly available instruction datasets and over 10 million examples sourced from Hugging Face. This comprehensive training corpus has been curated to ensure high performance across various languages, with dedicated support for Turkish and other languages.
**Data Freshness:**
The pretraining data includes content up to a cutoff date of Aug. 2024, ensuring that LLuMi is aligned with recent language trends and developments.
## 7. Benchmarks
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| OpenAI o1-1217 | 79.2 | - | 96.4 | 75.7 | 63.4 | 2061 |
| OpenAI o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| OpenAI GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek R1 | 79.8 | - | 97.3 | 71.5 | 65.9 | 2209 |
| LLuMi Think 70B | 69.3 | 86.4 | 94.1 | 64.8 | 56.9 | 1603 |
**Note on Benchmark Results:** Due to hardware limitations, full-scale benchmark tests could not be performed, and the results may vary. We remain fully transparent about these constraints and are actively working towards securing the necessary resources to conduct comprehensive evaluations in the near future.
## 8. Responsibility & Safety
At LLuMi, we are committed to promoting responsible and ethical use of our technology. We recognize that large language models carry inherent risks and potential for misuse, and we have taken several measures to mitigate these challenges:
- **Bias Mitigation:** We have implemented various strategies during training to minimize biases in model outputs. However, users should be aware that, despite these efforts, occasional biases or unintended outputs may still occur.
- **Usage Guidelines:** LLuMi is designed for research and responsible deployment. We strongly encourage users to adhere to ethical guidelines, applicable laws, and best practices when using the model. Generating harmful, misleading, or offensive content is strictly prohibited.
- **Safety Measures:** Users deploying LLuMi in real-world applications should implement additional safety filters and monitoring mechanisms. We recommend regular audits and evaluations to ensure that the model's outputs remain within acceptable ethical boundaries.
- **Community Engagement:** We invite the community to provide feedback on any safety or ethical issues encountered during usage. This collaborative approach is vital for continuously refining the model and addressing potential risks.
- **Transparency and Accountability:** By open-sourcing LLuMi, we aim to foster transparency and accountability. We commit to ongoing research and updates focused on improving the model's safety and ethical performance.
By using LLuMi, you agree to follow these guidelines and contribute to a safer, more responsible AI ecosystem.
## 9. License
This code repository and the model weights are licensed under the [MIT License](https://choosealicense.com/licenses/mit/).
The LLuMi Think series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- LLuMi Think 3B is derived from [Qwen-2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE).
- LLuMi Think 8B is derived from [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- LLuMi Think 70B is derived from [Llama3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 10. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
```
@misc{thellumi,
author = {The Lucy},
month = feb,
title = {{LLuMi Think}},
howpublished = {https://llumi.tech},
year = {2025}
}
```
## 11. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
bowilleatyou/c71c1479-b8d6-4202-854f-fd8c4ed1b600 | bowilleatyou | 2025-02-26T00:26:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T21:52:02Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tttx/model-250-force-022525 | tttx | 2025-02-26T00:26:30Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:tttx/250-force-022525",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
"license:mit",
"region:us"
] | null | 2025-02-26T00:09:54Z | ---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- tttx/250-force-022525
model-index:
- name: model-250-force-022525
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model-250-force-022525
This model is a fine-tuned version of [tttx/sft-32b-020925-19k-5ep](https://huggingface.co/tttx/sft-32b-020925-19k-5ep) on the tttx/250-force-022525 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
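That said, because this repository ships a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch might look like this (assuming the base model listed in the metadata; untested):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
adapter_id = "tttx/model-250-force-022525"

# Load the base model, then attach the fine-tuned LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```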
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 100
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3 |
markldn/b1-Q4_K_M-GGUF | markldn | 2025-02-26T00:23:19Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:straykittycat/b1",
"base_model:quantized:straykittycat/b1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-26T00:22:57Z | ---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: straykittycat/b1
---
# markldn/b1-Q4_K_M-GGUF
This model was converted to GGUF format from [`straykittycat/b1`](https://huggingface.co/straykittycat/b1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/straykittycat/b1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -c 2048
```
|
apitchai/Llama-3.2-3B-Instruct-F1-NLQ-CoT-5-Epochs-Finetuned-16bit | apitchai | 2025-02-26T00:22:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T00:22:31Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** apitchai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
texanrangee/e221c6c4-e718-44d1-9176-bccd9d7d777a | texanrangee | 2025-02-26T00:20:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T22:03:33Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Exurbia-Delta9-i1-GGUF | mradermacher | 2025-02-26T00:18:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ClaudioItaly/Exurbia-Delta9",
"base_model:quantized:ClaudioItaly/Exurbia-Delta9",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T22:52:21Z | ---
base_model: ClaudioItaly/Exurbia-Delta9
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ClaudioItaly/Exurbia-Delta9
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Exurbia-Delta9-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/AtmaLLaMA-GGUF | mradermacher | 2025-02-26T00:18:47Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:RakshitAi/AtmaLLaMA",
"base_model:quantized:RakshitAi/AtmaLLaMA",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T19:07:30Z | ---
base_model: RakshitAi/AtmaLLaMA
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/RakshitAi/AtmaLLaMA
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/AtmaLLaMA-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Blazgo/temp-model-for-2-mini-004 | Blazgo | 2025-02-26T00:17:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:CultriX/Qwen2.5-14B-ReasoningMerge",
"base_model:merge:CultriX/Qwen2.5-14B-ReasoningMerge",
"base_model:arcee-ai/Virtuoso-Small-v2",
"base_model:merge:arcee-ai/Virtuoso-Small-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T00:11:25Z | ---
base_model:
- CultriX/Qwen2.5-14B-ReasoningMerge
- arcee-ai/Virtuoso-Small-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) as a base.
### Models Merged
The following models were included in the merge:
* [CultriX/Qwen2.5-14B-ReasoningMerge](https://huggingface.co/CultriX/Qwen2.5-14B-ReasoningMerge)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: arcee-ai/Virtuoso-Small-v2
parameters:
density: 0.5
weight: 0.5
- model: CultriX/Qwen2.5-14B-ReasoningMerge
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
Maxymin/distilbert-base-uncased-finetuned-squad | Maxymin | 2025-02-26T00:14:53Z | 1 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2025-02-23T08:59:05Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2610
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
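For reference, a minimal sketch of how the listed values map onto `transformers.TrainingArguments` — reconstructed from the list above rather than taken from the actual training script, with `output_dir` as a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # Native AMP mixed precision
)
```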
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1697 | 1.0 | 5533 | 1.1382 |
| 0.8147 | 2.0 | 11066 | 1.1588 |
| 0.6341 | 3.0 | 16599 | 1.2610 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
byKim93/klue-roberta-base-klue-sts-mrc-2 | byKim93 | 2025-02-26T00:12:19Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:17552",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:byKim93/klue-roberta-base-klue-sts-2",
"base_model:finetune:byKim93/klue-roberta-base-klue-sts-2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-02-26T00:12:06Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:17552
- loss:MultipleNegativesRankingLoss
base_model: byKim93/klue-roberta-base-klue-sts-2
widget:
- source_sentence: ๋ฏธ๊ตญ์์ ๋ ๋ฒ์งธ๋ก ๋ง์ ์ ํ์ ๊ตญ์ ์?
sentences:
- ๋ฐ๊ทผํ ๋ํต๋ น์ด 17์ผ ํฌํญ์ ์ฒ ์ ๋ด ํ์ด๋ฅ์ค 3๊ณต์ฅ์ ์ฐพ์ ๊ฒ์ ์ธ๊ณ ์ ์ฒ ๊ธฐ์ ์ ์ ๋ํ๋ ํต์ฌ ์ฌ์
์ด๋ผ๋ ์ ์ ํ๊ฐํ ๊ฒ์ด๋ผ๊ณ ํฌ์ค์ฝ ์ธก์
์ค๋ช
ํ๋ค.ํ์ด๋ฅ์ค 3๊ณต์ฅ์ ์ง๋ 1์ ๊ฐ๋์ ์์ํ๋ค. ํ๋ฃจ 5700, ์ฐ 200๋ง์ ์ณ๋ฌผ์ ๋ฝ์๋ด๊ณ ์๋ค. ํฌ์ค์ฝ ๊ด๊ณ์๋ โ์ด๊ณณ์์ ์์ฐํ
์ณ๋ฌผ์ ๋ชจ๋ ์ ๊ฐ๊ณต์ฅ์์ ์ฌ์ฉ๋๋คโ๋ฉฐ โ๊ธฐ์กด์ ๊ณ ๋ก์์ ๋์จ ์ณ๋ฌผ๊ณผ ํ์ง์ ์ ํ ์ฐจ์ด๊ฐ ์๋คโ๊ณ ์ค๋ช
ํ๋ค. ํฌ์ค์ฝ๋ 1992๋
ํ์ด๋ฅ์ค ๊ณต๋ฒ
๊ธฐ์ ๊ฐ๋ฐ์ ์ฐฉ์ํด 11๋
๋ง์ธ 2003๋
์ฐ 60๋ง ๊ท๋ชจ์ 1๊ณต์ฅ ๊ฐ๋์ ์์ํ๋ค.ํฌ์ค์ฝ ๊ด๊ณ์๋ โ๋ค๋ฅธ ์ฒ ๊ฐ์
์ฒด๋ค๋ ํ์ด๋ฅ์ค์ ๊ฐ์ ์ฉ์ ๊ธฐ์
๊ฐ๋ฐ์ ๋์ฐ์ง๋ง ๋ชจ๋ ์คํจํ๋คโ๋ฉฐ โ์ด์ ํด์ธ ์
์ฒด๋ค๋ก๋ถํฐ ๊ธฐ์ ์์ถ ์์ฒญ์ด ์ด์ด์ง๊ณ ์๋คโ๊ณ ์ค๋ช
ํ๋ค. ์ค์ ๋ก 3๊ณต์ฅ ๊ฐ๋์ผ๋ก ์ ํด์ค๋น๊ฐ ๋
1๊ณต์ฅ ์ค๋น๋ ์ธ๋์ ๋ฉ์ค์ฝ์คํธ์ด ๊ด์ฌ์ ๋ณด์ฌ ์ง๋ 8์ ์ค๋น ๋งค๊ฐ์ ๊ดํ ์ํด๊ฐ์(MOU)๋ฅผ ์ฒด๊ฒฐํ๋ค. ์ค๊ตญ ์ถฉ์นญ๊ฐ์ฒ ๊ณผ ํจ๊ป ์ถ์ง ์ค์ธ ์ฐ์ฐ
300๋งt ๊ท๋ชจ์ ์ถฉ์นญ ํ์ด๋ฅ์ค ๊ณต์ฅ๋ ๋ด๋
์ค ์ฒซ ์ฝ์ ๋ฐ ์์ ์ด๋ค.ํฌ์ค์ฝ๋ ํ์ด๋ฅ์ค ๊ณต๋ฒ์ด ๊ธฐ์กด ๊ณ ๋ก ๋ฐฉ์๋ณด๋ค ์์ฐ๋น์ฉ์ด ์ ๋ ดํ๊ณ ํ๊ฒฝ์นํ์ ์ธ
๋งํผ ํด์ธ ์์ถ์ด ํ๋๋ ๊ฒ์ผ๋ก ๊ธฐ๋ํ๊ณ ์๋ค. ํ์ด๋ฅ์ค๋ ๊ณ ๋ก ๋ฐฉ์์ ๋นํด ํฉ์ฐํ๋ฌผ(SOx)๊ณผ ์ง์์ฐํ๋ฌผ(NOx) ๋ฐฐ์ถ๋์ด ๊ฐ๊ฐ 60%,
85% ์ ๋ ์ ๋ค. ํ์ฌ ๊ด๊ณ์๋ โ๊ณต์ฅ ์ค๋น์ 85%๋ฅผ ๊ตญ๋ด 37๊ฐ ์ค์๊ธฐ์
์์ ์ ์ํ๊ธฐ ๋๋ฌธ์ ํด์ธ์ ์์ถํ๋ฉด ์ค์๊ธฐ์
๋๋ฐ์ฑ์ฅ ํจ๊ณผ๋ฅผ
๊ธฐ๋ํ ์ ์๋คโ๊ณ ๊ฐ์กฐํ๋ค.
- ํ๊ตญ ์ฌ๋๋ค์ ์ข
์ข
์ ๊ทน๋จ์ ์ค๊ฐ๋ค. ๊ตํ์ ๋๊ฐ๋ฉด์ ์ ์ ๋ณด๋๊ฐ ํ๋ฉด, ์ ์ ๋ค๋๋ฉด์ ์ ํ์๋ฅผ ๋ ๋๊ณ ๋ฏผ๊ฐ์ ์์ ์งํจ๋ค. ์ฐ์ ์ ์น์ฑ์
๋๋ฆฌ๋ฉด์ ์ ๊ต์ ์ธ ์ ์ฌ๋ฅผ ์ง๋ด๊ธฐ๋ ํ๋ค. ํ์ฅ์๋ ๋จ๋ฐฉ๋ฌธํ์ ์์ง์ธ ๋์ฒญ๋ง๋ฃจ์ ๋ถ๋ฐฉ์์ ์ ๋ํ ์จ๋์ ํจ๊ป ๋ง๋ค์๋ค. ํ(ๆจ)์ผ๋ก ์ง ์์ด๋ฆฌ๋ฅผ
ํฅ(่)์ผ๋ก ํ์ด๋ธ๋ค. ใ๊ทน๋จ์ ํ๊ตญ์ธ, ๊ทน๋จ์ ์ฐฝ์กฐ์ฑใ์ โ๊ทน๋จโ์ด๋ ์ด์ณ๋ง๋ก ํ๊ตญ์ธ์ ๊ธฐ์ง์ ๋ถ์ํ ์ฑ
์ด๋ค. ์ ์๋ ๊ทน๋จ์ ํฌ์ฉํ๋ ํ๊ตญ์ธ์
ํน์ง์ ๋ค ๊ฐ์ง๋ก ๋ถ๋ฅํ๋ค. ํ๊ตญ์ธ์ ๊ทน๋จ๊ณผ ๊ทน๋จ์ ์์ฉํ๊ณ , ๊ทน๋จ์ ๋๋๋ค๊ณ , ๊ทน๋จ์ ์ค๊ฐ์ง๋๋ฅผ ๋ง๋ค์ด ์ถฉ๋์ ํผํ๊ณ , ๋ถ๋ถ์ ๊นจ๋ถ์์ด์
๋ ํฐ ํตํฉ์ ๋ง๋ค์ด๋ธ๋ค๋ ๊ฒ์ด๋ค. ์ ์๋ โํ๊ตญ์ธ์ ์๋ก ๋์ฒ์ ์ ์๋ ๊ฒ๋ค์ ๋์ด์๊ณ , ๋์๊ฐ ์ฌ๋ฌ ๊ฐ์ง๋ฅผ ์ฉ๊ด๋ก์ ๋ฃ๊ณ ์ต๋ณตํฉํด์ ์๋ก์ด
๊ฒ์ ๋ฝ์๋ธ๋คโ๋ฉฐ โ์ด๊ฒ์ด ํ๋ฏผ์กฑ์ด ๋ฐ์ ํ ์๋ฐ์ ์๋ ์ด์ โ๋ผ๊ณ ๋งํ๋ค.ํ๊ตญ์ธ์ โ๋นจ๋ฆฌ๋นจ๋ฆฌโ๋ฅผ โ์๊ทผ๊ณผ ๋๊ธฐโ ์๊ฒ ํ๋ ๋ฏผ์กฑ์ด๋ค. ์ ์๋
โ์ด๋ ๋ฏผ์กฑ์ด ๋นจ๋ฆฌ๋นจ๋ฆฌ ํ๋ฉด์ ์์ฑ๋๋ฅผ ๋์ผ ์ ์๋๋โ๋ฉฐ โ์ต์ฒ์ค๋ฝ๊ฒ ๋๊ณ ์ต์ฒ์ค๋ฝ๊ฒ ์ผํ๋ ์ฌ๋๋ค์ด ํ๊ตญ์ธโ์ด๋ผ๊ณ ๋งํ๋ค. ๋์์ โ์กฐ์ ์๋
๊ถ์์๋ 500๋
์ ํ๋ฃจ๋ ๋น ์ง์์ด ์์ ์ผ๊ฑฐ์์ผํฌ์กฑ์ ๊ธฐ๋กํ๊ณ ๋ฐฑ์ฑ๋ค์ ๋งค์ผ ๋
ผ์ผ๋ก ๋๊ฐ ๋์ฌ์ง๋ ๊ณ ์ญ์ ๊ฐ๋นํ๋คโ๋ฉฐ โํ๊ตญ์ธ์ ํ๋๋ฅผ ์์ํ๋ฉด
์ง์น์ง ์๊ณ ์ค๋ ๊ธฐ๊ฐ ์ง์ํ๋ ๋๊ธฐ๊ฐ ์๋ ์ฌ๋๋คโ์ด๋ผ๊ณ ๋ถ์ํ๋ค. ์๋ก ์์ถฉ๋ผ ๋ณด์ด๋ ๋ ๊ฐ์ง ๊ธฐ์ง์ด ๊ณต์กดํ๋ ๊ฒ์ด๋ค.์ ์๋ ์ฐ๋ฆฌ๋ง์๋
๊ทน๋จ์ ํฌ์ฉํ๋ ๋ฌธํ๊ฐ ๋ฐ์๋๋ค๊ณ ๋ณธ๋ค. ๋๋ค์ด, ๋นผ๋ซ์ด, ์ฌ๋ซ์ด ๋ฑ ๋ฐ๋๋๋ ์์๋ฅผ ํ๋๋ก ๋ฌถ์ ๋จ์ด๊ฐ ์์์ด ๋ง๋ค๋ ๊ฒ ์ ์์ ์ค๋ช
์ด๋ค.
ํ๊ตญ์ ์์ ๋ฌธํ๋ ์ ๊ทน๋จ์ ๋๋๋ ๋ค. ์ ์ฐฉ์ ์ฐ๋ฌผ์ธ ๋ฐํจ์ํ์ด ์ ๋ํ ๋ฐ๋ฌํ ํํธ ๊ฒ์ ์ด, ์์ถ์ ๊ฐ์ ์์ฐ ์ํ์ ์์์ ๊ทธ๋๋ก ์ฆ๊ธฐ๊ธฐ๋
ํ๋ค. ์ค๋ ๋์ด๋ ๋๋ฐฐ๊ธฐ์ ํ์๊ฐ์ ํ๋ฅด๋ฅด ๋์ด์ค๋ฅด๋ ์์๋๋น๋ฅผ ๋ชจ๋ ์ ์ฉํ๋ค.ํ๊ตญ์ธ์ ์ฐฝ์กฐ ์ ์ ์๋ ๋๋ก ์ ๊ทน์ฑ์ผ๋ก ํ์ถ๋๋ค. ํด์ธ์ ๋๊ฐ๋ณด๋ฉด
์ด๋ ๊ฐ๋ ํ ๋ฒ์ ํ๊ตญ ์ฌ๋์ ๋ง์ฃผ์น ๋งํผ ํ๊ตญ์ธ๋ค์ ์ธ๊ณ ๊ณณ๊ณณ์ ํผ์ ธ ์๋ค. ๋ฏธ๊ตญ ๋ด ์ ํ์ ์๋ ์ค๊ตญ ์ธ๋์ ์ด์ด ์ธ ๋ฒ์งธ๋ก ๋ง๋ค. ์ด๋
์ด๋ฟ์ผ๊น. ์ ๋์ธ์ ์ธ๊ณ 60์ฌ ๊ฐ๊ตญ์ ํฉ์ด์ ธ ์ด๊ณ , ์ค๊ตญ์ธ์ 100์ฌ ๊ฐ๊ตญ์์ ์ด๋ฏผ์๋ก ์ด๊ณ ์๋๋ฐ ์ธ๊ตฌ๊ฐ ๊ณ ์ 5000๋ง๋ช
์ ๋ถ๊ณผํ ํ๊ตญ
์ฌ๋๋ค์ 175๊ฐ๊ตญ์ ์ถ์ ํฐ์ ์ ์ก์๋ค. ์ ์๋ โ์๋ก์ด ๊ฒ์ ๋ํ ํธ๊ธฐ์ฌ๊ณผ ๊ตฝํ ์ค ๋ชจ๋ฅด๋ ๋์ ์ ์ ์ด ๋ง๋ค์ด๋ธ ๊ฒฐ๊ณผโ๋ผ๋ฉฐ โ๊ฐ์ง ๊ฒ์ด๋ผ๊ณ ๋
๋งจ๋ชธ๋ฟ์ธ ์ฌ๋๋ค์ด ๊ทผ๋ฉด๊ณผ ์ฑ์ค๋ก ์ธ๊ณ ๊ณณ๊ณณ์ ํ๊ณ ๋ค๊ณ ์๋คโ๊ณ ๋งํ๋ค.
- ๋ณดํ์ํ๋ ๊ธฐํํฐ์ฝ ์ฃผ๊ณ ๋ฐ๋๋ค๋ฉด โฆ.์ต์งํ ํ๋๋ผ์ดํ์๋ช
๋ํ. ๋ํ๋งํธ์์ ๋ณดํ์ํ์ ํ๋งคํ๊ณ ์ต๊ทผ์๋ ์ํ๊ธฐ์์๋ ๋ณดํ์ํ์ ํ๊ธฐ ์์ํด
์ฃผ๋ชฉ์ ๋ฐ๊ณ ์๋๋ฐ. 20, 30๋๋ฅผ ๊ฒจ๋ฅํด ๋ณดํ์ํ์ ์ ๋ฌผํ๋ ๋ฐฉ์๋ ๊ตฌ์ํ๊ณ ์๋ค๊ณ . ํด๋ํฐ์ผ๋ก ๊ธฐํํฐ์ฝ์ ์ฃผ๊ณ ๋ฐ๋ ๊ฒ์ฒ๋ผ ๋ณดํ์ํ ๊ธฐํํฐ์ฝ๋
์ฃผ๊ณ ๋ฐ์ ๋ ์ด ์ฌ์ง.๋ฐ์์ โ์ ์ ์ฌ์ ์ ์์ฌํ๋ ๊ฒ ๊ฐ๋คโ19์ผ ์์ธ์์ฒญ ๋ธ๋ฆฌํ๋ฃธ. ๋ฐ์์ ์์ฅ์ด โ์๋ฏผ ์ฃผ๊ฑฐ์์ ๋์ฑ
โ์ ๋ฐํํ ๋ค ํ
๊ธฐ์๊ฐ โ์ฌ์ ์ฌ๋ถ์ ์๊ด์์ด ๊ณํ์ ์ถ์งํ๋ ๋ฐ ๋ฌด๋ฆฌ๊ฐ ์๊ฒ ๋๋โ๊ณ ์ง๋ฌธ. 2018๋
๊น์ง ๋ฌ์ฑํ๊ฒ ๋ค๋ ๊ฒ์ ์ฌ์ ์ ์ผ๋์ ๋ ๊ณต์ฝ ์๋๋๋
์๊ธฐ. ๋ฐ ์์ฅ์ โ์ ์ ์ฌ์ ์ ์๋นํ ์๋ฌธ์ ๊ฐ๊ณ ๊ณ์ ๊ฒ ๊ฐ๋คโ๋ฉด์โฆ.KT ๊ด๊ณ ์ ์ฝง์์ผ ์ธํ์ โ์ง๋๋๊ณคโ?KT๊ฐ ๋ฐฉ์ ์ค์ธ โ์ฌ๋
๊ด๋์ญ LTE-์งํ์ฒ ํธโ ๊ด๊ณ ๊ฐ ๋
ผ๋. ๋ชจ์๋ฅผ ์๋ฑํ๊ฒ ์ฐ๊ณ ์ฝง์์ผ์ ๊ธฐ๋ฅธ ์์ ์จ๊ฐ โ๊ด๋์ญ, ๋นจ๋ผ์ ๋นจ๋ผโ๋ผ๊ณ ๋งํ์ KT ๋ชจ๋ธ์ด โ๋ชจ๋
์งํ์ฒ ์์์ ๋ค ๋๋๋?โ๊ณ ๋ฌป๊ณ โ์ ๋๋๊ตฌ๋โ๋ผ๊ณ ๋งํ๋๋ฐ, ์ด ์์ ์จ๊ฐ LG์ ํ๋ฌ์ค ๊ด๊ณ ๋ชจ๋ธ์ธ ์ง๋๋๊ณค์ ๋ฎ์์ผ๋.์ ํ ๋ฒ ์ ์ค์โ์ธ๋ก
๋ง๋ฒโ์ ํตํ ๊น?
- source_sentence: ๋ฏธ์ธ๋จผ์ง์ ์ํ ์งํ์ด ์๋ ๊ฒ์ ๋ฌด์์ธ๊ฐ?
sentences:
- ๋ฏธ์ธ๋จผ์ง๊ฐ ๊ธฐ์น์ ๋ถ๋ฆฌ๊ณ ์๋ค. ๋ฏธ์ธ๋จผ์ง๋ ๋ชธ์์ ์์ด๋ฉด ํ์ ํ๊ด ๋ฑ์ ๋ฌธ์ ๋ฅผ ์ผ์ผํฌ ์ ์๋ค. ํธํก๊ธฐ ์งํ์์ ๊ฒฝ์ฐ ๊ธฐ์นจ, ์ฒ์ ์ฆ์์ด
์
ํ๋๊ธฐ๋ ํ๋ค. ์ธ์ถ ์ ๋ฏธ์ธ๋จผ์ง ์ฃผ์๋ณด ๋ฐ๋ น ์ฌ๋ถ๋ฅผ ํ์ธํ๊ณ ๊ฐ์ข
๊ฑด๊ฐ ํผํด๋ฅผ ์ค์ด๊ธฐ ์ํด ๋
ธ๋ ฅํด์ผ ํ๋ค.๋ฏธ์ธ๋จผ์ง๋ ๊ณต๊ธฐ ์ค์ ๋ ๋์๋ค๋๋
์ค๊ธ์ ๋ฑ์ ๋งํ๋ค. ์ง๋ฆ์ด 10๋ง์ดํฌ๋ก๋ฏธํฐ(ใ, 1ใ=100๋ง๋ถ์ 1m)๋ณด๋ค ์์ ํ๋ ํ๊ด์ผ๋ก ๋ค์ด๊ฐ ์ ์๋ค. ๋ฏธ์ธ๋จผ์ง ๋
ธ์ถ์ด ์ฌ๋ง๋ฅ ์
๋์ธ๋ค๋ ์ฐ๊ตฌ ๊ฒฐ๊ณผ๋ ์๋ค. ๊ฐ์๊ธฐ ๋ง์ ์์ ๋ฏธ์ธ๋จผ์ง์ ๋
ธ์ถ๋๋ฉด ๊ธฐ์นจ, ํธํก๊ณค๋ ๋ฑ์ ์ฆ์์ ํธ์ํ ์ ์๋ค. ์ฒ์์ด ์
ํ๋๊ณ ๋ถ์ ๋งฅ์ด ์๊ธฐ๊ธฐ๋
ํ๋ค.๋ฏธ์ธ๋จผ์ง๋ก ์ธํ ๊ฑด๊ฐํผํด๋ฅผ ๋ง๊ธฐ ์ํด์๋ ์ธ์ถํ ๋ ๋ง์คํฌ, ๋ณดํธ์๊ฒฝ, ๋ชจ์ ๋ฑ์ ์ฐฉ์ฉํ๋ ๊ฒ์ด ์ข๋ค. ์ต์ฒ์
๊ฐ๋๊ฒฝํฌ๋๋ณ์ ํธํก๊ธฐ๋ด๊ณผ
๊ต์๋ โ๋ฏธ์ธ๋จผ์ง๋ ์ฃผ๋ก ํธํก๊ธฐ๋ฅผ ํตํด ์ฒด๋ด๋ก ๋ค์ด์จ๋คโ๋ฉฐ โ๋ง์ฑํ์์ฑํ์งํ ๋ฑ ๋ง์ฑ ํธํก๊ธฐ ์งํ์๋ ์ธ์ถ ์ ํ๊ฒฝ๋ถ ์ธ์ฆ๋งํฌ๊ฐ ์๋ ๋ฐฉ์ง๋ง์คํฌ๋ฅผ
์ฐฉ์ฉํด์ผ ํ๋คโ๊ณ ์กฐ์ธํ๋ค. ๋๊ฐ๋ค ๋์์ค๋ฉด ์ค์๋ฅผ ํด ๋จธ๋ฆฌ์นด๋ฝ์ด๋ ์ท ๋ฑ ๋ชธ์ ๋จ์ ์๋ ๋ฏธ์ธ๋จผ์ง๋ฅผ ์์ ์ผ ํ๋ค.๋ฏธ์ธ๋จผ์ง์ ํจ๊ป ์ธ๊ท ๋ฑ์ด
ํธํก๊ธฐ๋ฅผ ํ๊ณ ๋ชธ์์ผ๋ก ๋ค์ด์ค๊ธฐ๋ ํ๋ค. ์ด๋ ํธํก๊ธฐ๊ฐ ๊ฑด์กฐํ๋ฉด ์ธ๋ถ์์ ์นจํฌํ ๊ท ์ ๋ฐฐ์ถํ๋ ๊ธฐ๋ฅ์ด ๋จ์ด์ง๋ค. ์ด ๋๋ฌธ์ ํธํก๊ธฐ๋ฅผ ์ด์ดํ๊ฒ
์ ์งํด์ผ ํ๋ค. ํ๋ฅด๋ ๋ฌผ์ ์ฝ๋ฅผ ์์ฃผ ์ป์ผ๋ฉด ๋ฏธ์ธ๋จผ์ง๋ ์ธ๊ท ๋ฑ์ด ๋ฐ์ผ๋ก ๋๊ฐ๋ ๋ฐ ๋์์ด ๋๋ค. ๋ง์ฑ ํธํก๊ธฐ ์งํ์ ์๋ ํ์๋ ๋ชฉ ์์ด
๊ฑด์กฐํ๋ฉด ๊ธฐ์นจ ๋ฑ์ ์ฆ์์ด ์ฌํด์ง ์ ์๋ค. ๋ฌผ์ ๋์ธ ์ ์ ๋ ์ฑ๊ฒจ ๋ง์
์ผ ํ๋ค.์ง์์๋ง ์๋ค๊ณ ์์ฌํด์ ์ ๋๋ค. ์ฒญ์ํ ๋๋ ์ฐฝ๋ฌธ์ ๋ซ๊ณ
ํ๋ ๊ฒ ๋ซ๋ค. ๋ง์ฑ ํธํก๊ธฐ ์งํ์๋ผ๋ฉด ์ผ๋ฐ ์ฒญ์๊ธฐ ๋์ ๋ฏธ์ธ๋จผ์ง๋ฅผ ๊ฑธ๋ฌ์ฃผ๋ ํน์ํํฐ๊ฐ ๋ฌ๋ฆฐ ์ง๊ณต์ฒญ์๊ธฐ๋ฅผ ์ฌ์ฉํด์ผ ํ๋ค. ์นดํซ์ด๋ ์นจ๊ตฌ๋ฅ์๋
๋ฏธ์ธ๋จผ์ง๊ฐ ์ฝ๊ฒ ์์ผ ์ ์๋ค.์ด๋ฅผ ์๋ฐฉํ๊ธฐ ์ํด ์ฌ์ ์ฌ์ง ์นจ๊ตฌ๋ฅ ๋ฑ์ ์๋ฉ์ฅ์ ๋ฃ๊ฑฐ๋ ๋ฎ๊ฐ๋ฅผ ์์ ๋๋ ๊ฒ์ด ์ข๋ค. ๋ฏธ์ธ๋จผ์ง ๋๋๊ฐ ๋ฎ์์ง๊ฑฐ๋
๋จผ์ง ์ฃผ์๋ณด๊ฐ ํด์ ๋๋ฉด ์ฐฝ๋ฌธ์ ์ด์ด ํ๊ธฐํด์ผ ํ๋ค. ์นจ๊ตฌ๋ฅ ๋ฑ๋ ํธ์ด ์ค๋ด์ ์์ธ ๋ฏธ์ธ๋จผ์ง๋ฅผ ์ ๊ฑฐํด์ผ ํ๋ค.
- ํด๊ฑฐ ์๊ธฐ์ ์ฒํ ์๋์ฃผ๊ฑฐ๋น๊ณค๊ฐ๊ตฌ๋ฅผ ์ง์ํ๊ธฐ ์ํด ๊ธด๊ธ์์์ฃผํ ์ฌ์
์ด ์์๋๋ค. ์ด๋ก์ฐ์ฐ์ด๋ฆฐ์ด์ฌ๋จ(ํ์ฅ ์ด์ ํ), ํ์ค์ผํ(๋ํ์ด์ฌ ๊น์ฅ์ฐฌ),
๊ตฌ๋ก๊ตฌ์ฒญ(๊ตฌ์ฒญ์ฅ ์ด์ฑ), ์์ธ์ฃผํ๋์๊ณต์ฌ(์ดํ โSH๊ณต์ฌโ, ์ฌ์ฅ ๊น์ธ์ฉ)๋ 24์ผ(๋ชฉ) ๊ตฌ๋ก๊ตฌ์ฒญ ๋ฅด๋ค์์คํ์์ ใ๊ตฌ๋ก๊ตฌ ๊ธด๊ธ์์์ฃผํ ์ฌ์
ใ์
์
๋ฌดํ์ฝ์ ์งํํ๋ค. ๋ณธ ์
๋ฌดํ์ฝ์ ํตํด ์ง๋ 7์ 16์ผ ์ํ๋ ใ์์ธํน๋ณ์ ์๋ ์ฃผ๊ฑฐ๋น๊ณค ํด์๋ฅผ ์ํ ์ง์ ์กฐ๋ก์ใ์ ๋ฐ๋ฅธ ์๋ ๋์ ์ฃผ๊ฑฐ
์ ์ฑ
์ ํ์คํ ํ ์ ์๊ฒ ๋์๋ค. ์์ธ์ ์ผ๋ถ ์์น๊ตฌ์ ์ง์ญ ์ฃผ๊ฑฐ๋ณต์ง์ผํฐ์์๋ ๊ธฐ์กด์ ์ฝ 40ํธ์ ๊ธด๊ธ์์์ฃผํ์ ์ด์ํ๋ฉฐ ๊ฐ์์ค๋ฝ๊ฒ ํด๊ฑฐ
์๊ธฐ์ ์ฒํ ๊ฐ๊ตฌ๋ฅผ ์ํด ์์ ์ฃผ๊ฑฐ ์์ค์ ์ ๊ณตํด์๋ค. ํ์ง๋ง ๋ฐ์งํ ์ฃผํ ๋๋ ๋
ธํ ๋ ์ฃผํ์ ๊ธด๊ธ์์์ฃผํ์ผ๋ก ํ์ฉํ๊ฑฐ๋ ๊ฐ์กฑ ๋จ์๋ก ์ํํ
์ ์๋ ์ข์ ์ฃผํ์ธ ๊ฒฝ์ฐ๊ฐ ๋ง์๋ค. ์ด ๋๋ฌธ์ ์๋์ด ์๋ ๊ฐ๊ตฌ๋ฅผ ์ํ ์์ ํ ๊ธด๊ธ์์์ฃผํ์ด ํ์ํ ์ํฉ์ด์๋ค. ์ด๋ฒ ์
๋ฌดํ์ฝ์ ๋ฐ๋ผ ํ์ค์ผํ์ด
๊ฐ์ , ๊ฐ๊ตฌ ๋ฑ ๊ธด๊ธ์์์ฃผํ์ ์ํด ํ์ํ ๋ฌผํ์ ํ์ํ๋ฉฐ ์ด๋ก์ฐ์ฐ ์ด๋ฆฐ์ด์ฌ๋จ์ด ํ์๊ธ ์งํ์ ๋ด๋นํ๋ค. SH๊ณต์ฌ๋ ๊ธด๊ธ์์์ฃผํ ์ด์์ ์ํ
๋งค์
์๋์ฃผํ์ ์ ์ ์ ๊ณตํ๊ณ ๊ตฌ๋ก๊ตฌ์ฒญ์ ๊ธด๊ธ์์์ฃผํ ์ด์๊ณผ ํจ๊ป ์ฃผ๊ฑฐ์๊ธฐ๊ฐ๊ตฌ์ ์ฃผ๊ฑฐ ์ํฅ์ ์ํด ๋
ธ๋ ฅํ๊ฒ ๋๋ค. ์์ธ์ ์ฌ๋ ์ง์(๊ฐ๋ช
)์ด๋ค
๊ฐ์กฑ์ ์ฝ๋ก๋19 ์๊ธฐ๋ก ๋ถ๋ชจ๋์ ์๋์ด ์ค์ด๋ค์ด์ 5๊ฐ์์ ์์ธ๊ฐ ๋ฐ๋ ค ํด๊ฑฐ ์๊ธฐ์ ๋์ฌ์์์ง๋ง ์ด๋ฒ ๊ธด๊ธ์์์ฃผํ ์ฌ์
์ ํตํด ์ผ์์ ์ผ๋ก
๊ฑฐ์ฃผ์ง๋ฅผ ๋ง๋ จํ ์ ์๊ฒ ๋๋ค. ๊ฑฐ์ฃผํ๋ ๋์ ๋ค์ํ ์ง์์ฒด๊ณ๋ฅผ ์ฐ๊ณํด ์์ ์ ์ธ ์ฃผ๊ฑฐ ๊ณํ์ ์๋ฆฝํ๋ค. ํํธ ใ์์ธํน๋ณ์ ์๋ ์ฃผ๊ฑฐ๋น๊ณค ํด์๋ฅผ
์ํ ์ง์ ์กฐ๋ก์ใ ์กฐ๋ก ์ ์ ๋ฐ ์ด๋ฒ ๊ธด๊ธ์์์ฃผํ ์ฌ์
์ ์ฐธ์ฌํ๋ ์ด๋ก์ฐ์ฐ์ด๋ฆฐ์ด์ฌ๋จ ์ด์ ํ ํ์ฅ์ โ์ฝ๋ก๋ ์ํฉ์ด ์ฅ๊ธฐํ ๋๋ฉด์ ํด๊ฑฐ ์๊ธฐ์
๋์ธ ๊ฐ๊ตฌ๊ฐ ๋๊ณ ์์ผ๋ฉฐ, ์๋์ ๋๋ฐํ ๊ฐ๊ตฌ๋ ํด๊ฑฐ ์ํฉ์์ ๊ฒช๋ ์ด๋ ค์์ด ์ผ๋ฐ ๊ฐ๊ตฌ์ ๋นํด ํฌ๋ค. ์ด๋ฒ ๊ตฌ๋ก๊ตฌ์์ ์ฌ์
์ ์์์ผ๋ก ๋ ๋ง์
์์น๊ตฌ์์ ์ฌ์
์ด ์งํ๋๊ธธ ๊ธฐ๋ํ๋ค. ๋, ๊ธด๊ธ์์์ฃผํ์ ์
์ฃผํ ์๊ธฐ ๊ฐ๊ตฌ๊ฐ ๊ณต๊ณต์๋์ฃผํ ๋ฐ ์ผ๋ฐ ์ฃผ๊ฑฐ๋ก์ ์ฃผ๊ฑฐ ์ํฅ๊น์ง ์ด๋ฅผ ์ ์๋๋ก ์๋น์ค๋ฅผ
์ ๊ณตํ๋ ๊ฒ๋ ์ค์ํ ๊ฒ์ด๋ค. โ๋ผ๊ณ ๊ธฐ๋๊ฐ์ ํ์ํ๋ค.
- ์ง๋ 12์ผ ์๊ณต์ฌ ์ ์ ์
์ฐฐ์ ์ํํ ์์ธ ํ๋ฆํ๋์ํํธ ์ฌ๊ฑด์ถ์กฐํฉ์ ๊ฒฐ๊ตญ ๋๋ค์ ๊ณต์ฌ๋ฅผ ๋งก์ ์
์ฒด๋ฅผ ๋ฝ๋ ๋ฐ ์คํจํ๋ค. ์ด๋ฏธ ์ธ ์ฐจ๋ก๋
์ ์ฐฐ๋ผ ์ด๋ฒ์๋ ์์๊ณ์ฝ ๋ฐฉ์์ผ๋ก ๊ฑด์ค์ฌ๋ฅผ ์ง๋ช
ํ ์ ์์์ง๋ง ๊ทธ๋ง์ ๊ด์ฌ์ ๋ณด์ด๋ A๊ฑด์ค์ฌ๊ฐ ์ ์์ ์ ์ถ์ ํฌ๊ธฐํ๊ธฐ ๋๋ฌธ์ด๋ค. ์กฐํฉ ๊ด๊ณ์๋
โ์ด์ฌํ๋ฅผ ์ด์ด ์์ผ๋ก๋ ํ ์
์ฒด๊ฐ ๋จ๋
์ผ๋ก ์
์ฐฐ์ ์ฐธ์ฌํ๋๋ผ๋ ์๊ณต์ฌ ์ ์ ์ ์ํ ์ฃผ๋ฏผ์ดํ์ ์์ ํด ํต๊ณผ์ํฌ ์ ์๋๋ก ํ ๊ณํโ์ด๋ผ๊ณ ๋งํ๋ค.์ฌ๊ฐ๋ฐยท์ฌ๊ฑด์ถ์กฐํฉ๋ค์ด
ํ ์ง์ ํ๊ณ ์ ์ง์ผ๋ก ์ง์ด์ค ๊ฑด์ค์ฌ(์๊ณต์ฌ)๋ฅผ ๋ชจ์๋ ๋ฐ ์ ๋ฅผ ๋จน๊ณ ์๋ค. ๋ถ๋์ฐ ์์ฅ ์นจ์ฒด๊ฐ ์ฅ๊ธฐํ๋๋ฉด์ ์๊ธ์ฌ์ ์ด ๋น ๋ฏํด์ง ๊ฑด์ค์ฌ๋ค์ด
๊ณ์ฝ์กฐ๊ฑด์ด ์ ๋ฆฌํ๊ฑฐ๋ ๋ถ์์ฑ์ด ๋ฐ์ด๋ ๋จ์ง๋ง ๊ณจ๋ผ ์์ฃผํ๊ธฐ ๋๋ฌธ์ด๋ค. ์ต๊ทผ ์ฉ์ธ2๊ตฌ์ญ ์ฌ๊ฑด์ถ์ฌ์
๋ ์ฐธ์ฌ ์
์ฒด๊ฐ ์์ด ์๊ณต์ฌ ์ ์ ์
์ฐฐ์ด ์ ์ฐฐ๋๋ค.
์ค๋ 22์ผ ์
์ฐฐ์ด ์ค์๋ ์์ ์ด๋ ๋ถ์ฒ ์์ข
3D๊ตฌ์ญ ๋์ํ๊ฒฝ์ ๋น์ฌ์
๋ ์์ ๊ฐ์ตํ ํ์ฅ์ค๋ช
ํ์์ ๊ฑด์ค์ฌ๊ฐ ๋จ ํ ๊ณณ๋ ๋ํ๋์ง ์์ ์
์ฐฐ์ด
์๋ ์ ์ฐฐ๋๋ค. ์์ธ ์์1๊ตฌ์ญ ์ฌ๊ฑด์ถ์กฐํฉ๋ ์
์ฐฐ์ด ํ๋ฅ ์ค์ด๋ค. ์์ธ ๊ตฌ์ฐ1๊ตฌ์ญ๊ณผ ํ์ 3๊ตฌ์ญ์ ์๋
์ ์๊ณต์ฌ๋ฅผ ๊ต์ฒดํ๋ ค๊ณ ๊ณ์ฝ์ ํด์งํ๋ค๊ฐ
์ง๊ธ๊น์ง ๋ค๋ฅธ ๊ฑด์ค์ฌ๋ฅผ ์ฐพ์ง ๋ชปํ๊ณ ์๋ค.โ์๊ณต์ฌ ๋ชจ์๊ธฐโ๊ฐ ์ด๋ ค์์ ์ฒํ์ ์ฃผ๋ฏผ๋ค์ด ํ ๋ฒ ๊ฑฐ์ ํ๋ ์๊ณต์ฌ์ ๋ค์ โ๋ฌ๋ธ์ฝโ์ ๋ณด๋ด๋ ๊ฒฝ์ฐ๋
๋ํ๋๊ณ ์๋ค. ์์ธ ์๋๋ ๋๋ฆผ์ํํธ๋ ๋ํ์ฃผํ ๋น์จ ๋ฑ์ ๋๋ฌ์ผ ์ค๊ณ๋ณ๊ฒฝ ๊ฑด์ผ๋ก B๊ฑด์ค์ฌ์ ๋ณธ๊ณ์ฝ ์ง์ ์ ๊ณ์ฝ์ ํด์งํ ๋ฐ ์๋ค. ์ดํ
์๋ก์ด ์๊ณต์ฌ๋ฅผ ์ฐพ์๋์ฐ์ง๋ง ์
์ฐฐ์ด ๋ฒ๋ฒ์ด ์ ์ฐฐ๋๋ฉด์ ๊ฒฐ๊ตญ B์
์ฒด์ ๋ค์ ์์ ๋ด๋ฐ์๋ค. ๊ธ๋ก๋ฒ ๊ธ์ต์๊ธฐ ์ดํ ์กฐํฉ์ด ์๊ณต์ฌ๋ฅผ ๋ฐ๊พธ๋ ์ฌ๋ก๋
์ ์ง ์์๋ค. ์๊ณต์ฌ๋ ๊ธ์ตํ์ฌ์์ ๋์ ๋น๋ ค ์กฐํฉ์ ์ฌ์
๋น ๋ฑ์ผ๋ก ๋์ฌํด ์ฃผ๋๋ฐ ์ฌ๋ฌด์ํ๊ฐ ๋๋น ์ง ๊ฑด์ค์ฌ๋ ์ฌ์
์ฑ์ด ๋จ์ด์ง๋ ์ฌ์
์ฅ์์ ์ ๋๋ก
์๊ธ์ด ์ง์๋์ง ์์๊ธฐ ๋๋ฌธ์ด๋ค. ๊ฑด์ค์ฌ๋ค๋ ๊ณผ๊ฑฐ์ฒ๋ผ ํฐ ์๊ธ์ ์์๋ถ์ผ๋ฉฐ ์ฌ์
์์ฃผ๋ฅผ ์ํ ๊ธฐ๋๊ถ ๊ตฌ์ถ์๋ง ๋งค๋ฌ๋ฆฌ์ง ์๋๋ค. ์์ โ๋ ์ฌ์
โ์
๋์ ํ๊ฒ ๊ณ ๋ฅธ๋ค๋ ์๋ฏธ๋ค. ํ ๋ํ ๊ฑด์ค์ฌ ์์
๋ด๋น ์๋ฌด๋ โ์ญ์ธ๊ถ๋ ์๋๊ณ ์์ธ๋ ๋จ์ด์ง๋๋ฐ ์กฐํฉ์๋ค์ด ๋น์ผ ์ผ๋ฐ ๋ถ์๊ฐ๋ฅผ ๊ณ ์งํ๋ฉด ๋ต์ด
์๋คโ๋ฉฐ โ์ฌ๊ฐ๋ฐยท์ฌ๊ฑด์ถ์ ์ฃผ๋ฏผ ๊ฐ ๊ฐ๋ฑ์ผ๋ก ์ฌ์
์ด ๋ฆ์ด์ง๋ ๋ฑ ๋ฆฌ์คํฌ๊ฐ ํฌ๊ธฐ ๋๋ฌธ์ ๊ฑด์ค์ฌ๋ค์ ์ ์คํ ์ฌ์
์ฅ์ ๊ณ ๋ฅด๊ณ ์๋คโ๊ณ ์ค๋ช
ํ๋ค.
- source_sentence: ์ค์ค๋ง ํ๋ฅดํฌ์ ์ํ ๊ฐ ์ถ์๋ ์์ธ์?
sentences:
- '๋ฏธ๊ตญ๊ณผ ์๊ตญ, ํ๋์ค ๋ฑ์ง์์๋ ๋ฏผ์ฃผ์ฃผ์๊ฐ ๋ฐ์ ํ๋ค. ์ผ๋ณธ์ ์ค์ธ์๋์์ ๊ตฐ๋์ ๋ํ ์ง๋ฐฐ๊ถ์ ํ๊ณ ํ ํ ์ ์๊ฒ ๋์๋ค.
ํํธ, ๋
์ผ์ ๋ฒ ๋ฅด์ฌ์ ์กฐ์ฝ์ผ๋ก ๋ง๋ฏธ์์ ๋ฐ์ฑ๋ณด๋ค ์ง๋
ํ ๊ฐ๋๊ณผ ๋ฐฐ์๊ธ์ ๋ํ ๊ฒ์ ์๋ฌ๋ ธ์ผ๋ฉฐ ์ค์ค๋ง ํ๋ฅดํฌ๋ ์ธ๋ธ๋ฅด ์กฐ์ฝ์ ๋งบ์์ผ๋ก์จ ์ํ ๊ฐ
ํฌ๊ฒ ์ค์ด๋ค์๋ค(1922๋
ํด์ฒด, 1923๋
ํฐํค ๊ณตํ๊ตญ ์๋ฆฝ). ์ค์คํธ๋ฆฌ์์ ํ๊ฐ๋ฆฌ๋ ๊ฐ๊ฐ ์์ ๋ฅด๋งน ์กฐ์ฝ, ํธ๋ฆฌ์๋ ์กฐ์ฝ์ ๋งบ์์ผ๋ก์จ ์ํ ๊ฐ
ํฌ๊ฒ ์ค์ด๋ค์๋ค. ๋ถ๊ฐ๋ฆฌ์๋ ๋์ด ์กฐ์ฝ์ผ๋ก ๋จ๋๋ธ๋ฃจ์๋ฅผ ๋ฃจ๋ง๋์์ ๋ผ์ด์ฃผ์๋ค.
์ดํ๋ฆฌ์๋ ์น์ ๊ตญ์ด์์ผ๋ ์ฐํฉ๊ตญ์๊ฒ ์ํ ๋ฅผ ๋ณด์ฅ๋ฐ๊ธฐ๋์ปค๋
๋๋๋ฅผ ๋ฐ์๋ค. ๊ฒฐ๊ตญ 1922๋
์ ๋ฒ ๋ํ ๋ฌด์๋ฆฌ๋์ ์ํ ํ์์คํธ ์ ๊ถ์ด ์๋ฆฝ๋๋ค.
์ดํ๋ฆฌ์์ ๋ง์ฐฌ๊ฐ์ง๋ก ์ค๊ตญ์ ์ฐํฉ๊ตญ์์๋ ๋ถ๊ตฌํ๊ณ ์ฐ๋ฅ ๋ฐ๋์ ๋ํ ์ด๊ถ์ ๋๋ ค๋ฐ์ง ๋ชปํ์๋ค. ์ฐ๋๋ก ์์จ์ ๋ฏผ์กฑ์๊ฒฐ์ฃผ์ ์์น์ ๋ฐ๋ผ ์ค์์ ๋ฝ์
๋ง์ ๊ตญ๊ฐ๋ ๋
๋ฆฝํ์์ผ๋ฉฐ, ๋
๋ฆฝ์ ์กฐ๊ฑด์ผ๋ก ์๊ตญ์ ๋์๋ ์ธ๋๋ ๊ทธ ์ฝ์์ด ๋ฌด์ฐ๋์ ์ง์์ ์ธ ํฌ์ ์ด๋์ ์์ํ๋ค.
ํํธ, ์ฐ๋๋ก ์์จ ๋ฏธ๊ตญ ๋ํต๋ น์ ๋ฏผ์กฑ ์๊ฒฐ์ฃผ์๋ฅผ ์ ์ฐฝํ์์ผ๋ฉฐ, ์ ์์ ๋ฐฉ์ง์ ์ธ๊ณ์ ํํ๋ฅผ ์ํด ๊ตญ์ ์ฐ๋งน์ ์ค๋ฆฝํ ๊ฒ์ ์ ์ํ์๋ค. ์ด๋ก์จ
๊ตญ์ ์ฐ๋งน์ด ์ค๋ฆฝ๋์์ผ๋, ์ ์ ๋ฏธ๊ตญ์ ์ํ์ ๋ฐ๋๋ก ๊ฐ์
์ ์คํจํ์๋ค. ๊ฒฐ๊ตญ ๋ค์ ๊ณ ๋ฆฝ์ ๊ธธ์ ๊ฑธ์๋ค.'
- ๋ด๊ณผ ์ธ๊ณผ ์์์ฒญ์๋
๊ณผ ์ฐ๋ถ์ธ๊ณผ ๋ง์ทจํต์ฆ์ํ๊ณผ ๋ฑ 5๊ฐ โํ์ ์ง๋ฃ๊ณผ๋ชฉโ ์ ๋ฌธ์(้ซ)๋ฅผ ๋ชจ๋ ๊ฐ์ถ์ง ๋ชปํ ์ยท๊ตฐยท๊ตฌ๊ฐ ์ ๊ตญ 251๊ณณ ๊ฐ์ด๋ฐ
27๊ณณ์ ๋ฌํ๋ค. ๊ตํต์ฌ๊ณ ๋ ์ฌ๊ทผ๊ฒฝ์ ๋ฑ์ผ๋ก ์๊ธ์ค์ ์ฐพ์ ํ์๋ฅผ ์ ๋ดํ๋ ์๊ธ์ํ๊ณผ ์ ๋ฌธ์๊ฐ ์๋ ์ง๋ฐฉ์์น๋จ์ฒด๋ 51๊ณณ์ด์๋ค. โถ๊ด๋ จ๊ธฐ์ฌ
A3๋ฉด๊ฑด๊ฐ๋ณดํ์ฌ์ฌํ๊ฐ์์ ์ยท๊ตฐยท๊ตฌ๋ณ โ์ ๋ฌธ๊ณผ๋ชฉ๋ณ ์ ๋ฌธ์ ์ธ์ ํํฉโ๊ณผ โํ์๊ณผ๋ชฉ๋ณ ์์ ํํฉโ์ ๋ฐ๋ฅด๋ฉด ์ ๋ฌธ์ ์๋ ์ต๊ทผ 5๋
(2009~2013๋
)
์ฌ์ด์ 1๋ง๋ช
๋๊ฒ ์ฆ๊ฐํ๋๋ฐ๋ ํ์ ์ง๋ฃ๊ณผ๋ชฉ ์ ๋ฌธ์๋ฅผ ๋ค ๊ฐ์ถ์ง ๋ชปํ ์ง๋ฐฉ์์น๋จ์ฒด๋ ์คํ๋ ค 4๊ณณ ๋์๋ค. ํ์ ์ง๋ฃ๊ณผ๋ชฉ์ด๋ โ์๊ธ์๋ฃ์ ๊ดํ
๋ฒ๋ฅ ์ํ๊ท์นโ์์ ์๊ธ์๋ฃ๊ธฐ๊ด์ ๋น์ง ์ ๋ฌธ์๋ฅผ ๋ฐ๋์ ๋๋๋ก ํ 5๊ฐ ์ง๋ฃ๊ณผ๋ชฉ์ ๋งํ๋ค.๊ฒฝ๋ถ ์์๊ตฐ์ ๋ด๊ณผ๋ฅผ ์ ์ธํ ๋ชจ๋ ํ์ ์ง๋ฃ๊ณผ๋ชฉ์์
์ ๋ฌธ์๊ฐ ํ ๋ช
๋ ์์๋ค. ๊ฐ์ ์์๊ตฐ์ ์ฐ๋ถ์ธ๊ณผ ์์์ฒญ์๋
๊ณผ ๋ง์ทจํต์ฆ์ํ๊ณผ ๋ฑ 3๊ฐ ํ์๊ณผ๋ชฉ ์ ๋ฌธ์๊ฐ ์๋ค. ์ธ๊ณผ ์ ๋ฌธ์๊ฐ ์๋ ์ง์์ฒด๋
๊ฒฝ๋ถ ๋ดํ๊ตฐยท์ธ๋ฆ๊ตฐ ๋ฑ 3๊ณณ์ด์๊ณ ๋ง์ทจํต์ฆ์ํ๊ณผ ์ ๋ฌธ์๊ฐ ์๋ ๊ณณ๋ ๊ฐ์ ์๊ตฌ๊ตฐ, ์ถฉ๋ถ ๋จ์๊ตฐ ๋ฑ 9๊ณณ์ ๋ฌํ๋ค.์์์ฒญ์๋
๊ณผ ์ ๋ฌธ์๋ฅผ ์ฐพ์
์ ์๋ ์ง์์ฒด๋ ์ถฉ๋ถ ๋ณด์๊ตฐยท๊ดด์ฐ๊ตฐ๊ณผ ์ ๋ถ ์ง์๊ตฐ ๋ฑ 14๊ณณ, ์ฐ๋ถ์ธ๊ณผ ์ ๋ฌธ์๊ฐ ์๋ ๊ณณ์ ๊ฒฝ๋ถ ๊ณ ๋ น๊ตฐยท์์ฑ๊ตฐ๊ณผ ์ ๋จ ๊ตฌ๋ก๊ตฐ ๋ฑ 12๊ณณ์ด์๋ค.
- โโ์ปดํจํฐ๊ฐ ๊ทธ๋ฆฌ ์ข์ผ๋ฉด ํ๊ต๋ฅผ ๊ทธ๋ง๋๋ผโ๋ ์๋ง์ ์กฐ์ธ์ด ํ
๋ธ๋ฌ ์ค๋ฆฝ์ ๋ฐ๋จ์ด ๋๋ค.โ์ผํ๊ฐ ์ต๊ทผ ์ธ์ํ ๋ง์ดํฌ๋ก ๋ธ๋ก๊น
์ฌ์ดํธ โํ
๋ธ๋ฌโ
์ฐฝ์
์ ๋ฐ์ด๋น๋ ์นดํ(์ฌ์ง)์ ์ฑ๊ณต ์์ธ์ ๋๊ณ ์ฃผ์ ์ธ์ ๋ค์ด ์ ํ ๋ง์ด๋ค. ์ฌ๋ ๋ถ๋ชจ์ ๋ฌ๋ฆฌ ์ปดํจํฐ์ ๋น ์ ธ ์ด๋ 14์ธ ์๋
์ด ์์ ์ด ์ํ๋
์ผ์ ๋ชฐ๋ํ ์ ์๋๋ก ๊ณผ๊ฐํ๊ฒ ํ๊ต ์คํด๋ฅผ ๊ถ์ ํ ์๋ง์ ๊ฒฐ์ ์ด 20๋ ์ต๋ง์ฅ์ ํ์์ ๋ฐ๊ฑฐ๋ฆ์ด ๋๋ค๋ ๊ฒ์ด๋ค.22์ผ ๋ด์ํ์์ค ๋ฑ์ ๋ฐ๋ฅด๋ฉด
๊ทธ๋ 2000๋
๋ด์์ ์ผ๋ฅ ๊ณต๋ฆฝํ๊ต์ธ ๋ธ๋กฑํฌ์ค๊ณผํ๊ณ ์ ๋ค๋
๋ค. ๋น์ 14์ธ์ธ ์นดํ๋ ๋จธ๋ฆฌ๊ฐ ์ด๋ช
ํ์ง๋ง, ๋ด์ฑ์ ์ธ ๋ฐ๋ค ํ๋ฃจ ์ข
์ผ ์ปดํจํฐ์
๋น ์ ธ ์ด์๋ค.๊ทธ์ ์๋ง ๋ฐ๋ฒ๋ผ ์์ด์ปค๋จผ์ โ์นดํ๋ 10๋ ์๋
์ด ๊ทธ๋ ๋ฏ ์ฌ์์น๊ตฌ์ ๋น๋์ค ๊ฒ์์ ์ข์ํ์ง๋ง, ์ปดํจํฐ๋งํผ ๊ทธ๋ฅผ ๋งคํน์ํค์ง๋ ๋ชปํ๋คโ๋ฉฐ
โ๊ทธ์ ์ด์ ์ ์ด๋ฆด ๊ณต๊ฐ์ด ํ์ํ๋คโ๊ณ ํ๊ณ ํ๋ค. ๋น์ ์ฌ๋ฆฝํ๊ต์ ๊ณผํ๊ต์ฌ์๋ ์์ด์ปค๋จผ์ ์๋ค์ด ํ๊ต๋ฅผ ์คํดํ๋ ๋์ ํ์ค์ฟจ์ ํตํด ํ์
์
๊ณ์ํ๋๋ก ํ๋ค. ์ด์ ์ ๊ธฐ์ [email protected]
- source_sentence: ๋
น์ง ํ๋ฆฌ๋ฏธ์ ๋จ์ง'๋ผ๊ณ ๋ถ๋ฆฌ๋ ์ํํธ์์ ๊ฑธ์ด์ ๊ฐ ์ ์๋ ์ญ์ ์ด๋ฆ์?
sentences:
- ์กฐํ์ต ํ๊ตญ์ ๋ ฅ ์ฌ์ฅ(์ฌ์ง)์ด ์์ธ ์ผ์ฑ๋ ํ์ ๋ณธ์ฌ ๊ฑด๋ฌผ์ ๋งค๊ฐํ์ง ์๊ฒ ๋ค๋ ๋ป์ ๋ด๋น์ณค๋ค. โ์ ๋งค๊ฐ ํ์ด์ โ์ด๋ผ๋ ์ ๋ถ์ ๊ณต๊ณต๊ธฐ๊ด ํ์ ๋์
์ด์ ๋ฐฉ์นจ๊ณผ ๊ฑฐ๋ฆฌ๊ฐ ์๋ ๋ฐ๋ค ์ผ์ฑ์๋ช
KB๊ธ์ต ๋ฑ์ด ์ด ๋ถ์ง๋ฅผ ๋งค์
ํ๊ธฐ ์ํด ๋ฌผ๋ฐ ๊ฒฝ์์ ๋ฒ์ด๋ ์ํฉ์ด์ด์ ํ์ฅ์ด ์์๋๋ค. ์กฐ ์ฌ์ฅ์ 29์ผ
์ง์๊ฒฝ์ ๋ถ ์ถ์
๊ธฐ์๋ค๊ณผ ๋ง๋ ํ์ ๋ณธ์ฌ ๊ฑด๋ฌผ์ ๋ํด โ(์ง๊ธ์ผ๋ก์๋) ๋งค๊ฐํ ์๊ฐ์ด ์๋คโ๋ฉฐ โ์ผ๋ฐ๋งค๊ฐ๋ณด๋ค๋ ํฅํ ๊ฐ๋ฐ์ ํตํด ์์ต์ ์ฐฝ์ถํ๋
๋ฐฉ์์ ์ ๋ถ์ ํ์ํ๊ฒ ๋คโ๊ณ ๋งํ๋ค. ํ์ ์ ๋ด๋
8์ ์ ๋จ ๋์ฃผ ํ์ ๋์๋ก ์ด์ ์ด ์์ ๋ผ ์๋ค. ์ ๋ถ ๋ฐฉ์นจ์ ๋ฐ๋ฅด๋ฉด ํ์ ๋์๋ก ๋ณธ์ฌ๋ฅผ ์ฎ๊ธฐ๋
๊ณต๊ณต๊ธฐ๊ด์ ์ด์ ์ ์ ๋ณธ์ฌ ๊ฑด๋ฌผ์ ๋งค๊ฐํด์ผ ํ๋ค. ์์ธ ์ผ์ฑ๋ ํ์ ๋ณธ์ฌ ๋ถ์ง๋ 7934ใก ๊ท๋ชจ๋ก ์๊ฐ 3์กฐ์์ผ๋ก ์ถ์ ๋๋ค. ์์ธ ๊ฐ๋จ๊ถ์
์์นํ ์ฌ์ค์ ๋ง์ง๋ง ๊ธ์ธ๋ผ๊ธฐ ๋
์ด๊ธฐ ๋๋ฌธ์ด๋ค. ์กฐ ์ฌ์ฅ์ ๋ ์ฐ๋ด ์ ๊ธฐ์๊ธ ์ถ๊ฐ ์ธ์ ๊ฐ๋ฅ์ฑ์ ๋ํด โ๋จ์ ์ ์ผ๋ก ์ด์ผ๊ธฐํ ์ ์์ง๋ง ํ์ฌ๋ก์๋
์ ๊ธฐ์๊ธ์ ์ถ๊ฐ๋ก ์ธ์ํ ์๊ฐ์ด ์๋คโ๊ณ ๋ฐํ๋ค. ์ ๊ธฐ์๊ธ ๋์ง์ ์ถ์์ ๊ด๋ จํด์๋ โ๋์ง์ ๋ฅผ ํตํด ๋ง๋ จํ ์ฌ์์ผ๋ก ๋น๋ฏผ๋ค์๊ฒ ์ธ๊ฒ ์ ๊ธฐ๋ฅผ
๊ณต๊ธํ๋ ๊ฒ์ ์ข๋คโ๋ฉด์๋ โ(์ผ๋ถ ๊ณ์ธต์ ๋์์ผ๋ก) ๊ณผ๋ํ ์๊ธ์ ์ฑ
์ ํ๋ ๊ฒ์ ๋ฌธ์ ๊ฐ ์๋ค๊ณ ๋ณธ๋คโ๊ณ ์ค๋ช
ํ๋ค.
- LG์ํ๊ฑด๊ฐ์ ํ๋ฐฉ ํ์ฅํ ๋ธ๋๋ โํโ๋ ์ง๋๋ฌ ๋ง ๋ชจ๋ธ ์ด์์ ์จ์ 11๋
์ฐ์์ผ๋ก ๊ณ์ฝ์ ๊ฐฑ์ ํ๋ค. ํ๊ฐ ์ฐ๋งค์ถ ์ฝ 4300์ต์(์ง๋ํด
๊ธฐ์ค)์ ๋ํ ๋ธ๋๋๋ก ์ฑ์ฅํ๊ธฐ๊น์ง ์คํ๊ถ ํ๋ฅ์คํ์ธ ์ด์จ์ ๊ณต๋ก๊ฐ ์ปธ๋ค๋ ์ด์ ์์๋ค. ํ์ ๋ํ ์ ํ์ธ โ๋น์ฒฉ์์ ์์ผ์คโ๊ฐ โ์ด์์ ์์ผ์คโ๋ผ๋
๋ณ์นญ์ผ๋ก ๋ถ๋ฆด ์ ๋๋ก ์์ธก์ ๋๋ํ ๊ด๊ณ๋ฅผ ์ด์ด์ค๊ณ ์๋ค.๋น ๋ฅด๊ฒ ๋ณํ๋ ์ ํ๋งํผ ๋ชจ๋ธ๋ ์์ฃผ ๋ฐ๋๋ ํ์ฅํ์
๊ณ์์ 10๋
์ด์ ์ฅ์ํ๋ โ๋๊ธฐ๋กโ์
์ด ์ฐ์์ธ์ด ์์ ๋ฑ์ฅํ๊ณ ์๋ค.์ด์จ ๋ชป์ง์์ ์ฅ์๋ชจ๋ธ๋ก 10๋
์งธ SK-โ
ก ๋ชจ๋ธ๋ก ํ๋ ์ค์ธ ๊นํฌ์ ์จ๊ฐ ๋ํ์ ์ด๋ค. โ๋์น์ง ์์ ๊ฑฐ์์โ๋ผ๋
๊น์จ์ ๊ด๊ณ ๋ฌธ๊ตฌ๋ SK-โ
ก์ ์์ง์ด ๋๋ค. ํ์ฌ ์ธก์ โSK-โ
ก์ ๊น์จ๋ ์ด์ ๋ธ๋๋์ ๋ชจ๋ธ์ ๊ด๊ณ๋ฅผ ๋์ด โ๊ฐ์กฑโ์ด๋ผ๊ณ ํํํด์ผ ํ ์ ๋โ๋ผ๊ณ
ํ๋ค. ๊ตญ๋ด ํ์ฅํ ๊ด๊ณ ์ญ์ฌ์ ์ต์ฅ์ ๊ด๊ณ ๋ชจ๋ธ์ ์ฑ์๋ผ ์จ๋ก ์๋ ค์ก๋ค. 1991๋
๋ถํฐ 2006๋
๊น์ง 15๋
๋์ ์ฝ๋ฆฌ์๋ ๋ชจ๋ธ๋ก ํ๋ํ๋ค.ํ์ฅํ
๊ด๊ณ ์ ์์ฃผ ๋ฑ์ฅํ๋ ์ ์งํ ์ด๋์ ์กํ๊ต ๋ฑ์ โํนA๊ธ ๋ชจ๋ธโ์์ ๋ถ๋ช
ํ์ง๋ง ๋ธ๋๋๋ฅผ ์ฌ๋ฌ ์ฐจ๋ก ๊ฐ์ํ๋ค. ์ ์จ๋ ์๋ฐ๋ ๋ผ๋ค์ฆ ํ์จ ์ผ๋ฆฌ
ํค๋ผ, ์ด์จ๋ ๋ผ๋ค์ฆ ์์ด์คํ ๋์ฝค ์จ, ์ก์จ๋ ์๋ฐ๋ ์ด๋์คํ๋ฆฌ ๋ผ๋ค์ฆ ๋ฑ ๋ค์ํ ๋ธ๋๋์ ๋ชจ๋ธ๋ก ํ๋ํ๋ค.๊นํํฌ ์จ๋ 2004๋
LG์ํ๊ฑด๊ฐ
์คํ ๋ชจ๋ธ๋ก ํ๋ํ๋ค๊ฐ 2006๋
์๋ชจ๋ ํผ์ํฝ ํค๋ผ๋ก ๋ฐ๊พธ๊ณ , 2011๋
๋ค์ ์คํ๋ก ๋ณต๊ทํ ๋
ํนํ ์ฌ๋ก๋ค. ์ด ๊ณผ์ ์์ ์๋ชจ๋ ํผ์ํฝ๊ณผ LG์ํ๊ฑด๊ฐ์ด
๊ฑฐ์ก์ ๋ชจ๋ธ๋ฃ๋ฅผ ์ ์ํ๋ฉฐ ์น์ดํ โ๊นํํฌ ์ํ์ โ์ ๋ฒ์ด๊ธฐ๋ ํ๋ค.ํ์ฅํ์
๊ณ ๊ด๊ณ์๋ โํ์ฅํ ๋ธ๋๋๊ฐ ๋ง์์ง๋ฉด์ ๋ชจ๋ธ ๊ณ์ฝ์ ํ ์ฐ์์ธ์ด
โ๋์ด ๋ฌ๋คโ๋ ์๊ธฐ๊ฐ ๋์จ ์ง ์ค๋โ๋ผ๋ฉฐ โ1๋
์ํ์ ๋จ๋ฐ๊ณ์ฝ์ด ๋๋ถ๋ถ์ด๋ผ ํ ๋ธ๋๋์์ ์ฅ์๋ชจ๋ธ๋ก ํ๋ํ๋ ๊ฒ์ ๋๋จํ ์ด๋ ค์ด ์ผโ์ด๋ผ๊ณ
๋งํ๋ค.
- ๊ฒฝ๊ธฐ ์ฉ์ธ์๋ ์ฑ๋จ ๋ถ๋น์ ๋์์ ๊ฐ๊น์ด ์ง๋ฆฌ์ ์ด์ ๋๋ถ์ 2000๋
๋ ์ค๋ฐ โ๋ฒ๋ธ์ธ๋ธโ์ผ๋ก ๋ถ๋ฆฌ๋ฉฐ ์๋๊ถ ์ฃผํ์์ฅ์ ์ฃผ๋ํ์ง๋ง ๊ณผ์๊ณต๊ธ๊ณผ
2008๋
๊ธ์ต์๊ธฐ ์ฌํ๋ก ๋ฏธ๋ถ์์ด ๊ธ์ฆํ๋ฉด์ โ๋ถ ๊บผ์ง ์งโ์ด ์์ถํ๋ค. ์๋๊ถ ๋ด ๋ํ์ ์ธ ๋ฏธ๋ถ์ ์ง์ญ์ผ๋ก ๊ผฝํ๋ค.๊ทธ๋ฌ๋ ์ฉ์ธ ์ง์ญ ๋ถ์๊ธฐ๊ฐ
๋ฌ๋ผ์ก๋ค. ์๋๊ถ ์ ์ธ๋์ผ๋ก ๋งค๋งค ์ ํ ์์๊ฐ ๋๋ฉด์ ์ง๋ํด 1๋ง9055๊ฐ๊ตฌ์ ์ํํธ๊ฐ ๊ฑฐ๋๋ผ ์์์(2๋ง280๊ฐ๊ตฌ)์ ์ด์ด ์๋๊ถ ์ํํธ
๊ฑฐ๋๋ 2์์ ์ฌ๋๋ค. ์ฒญ์ฝ ์ด๊ธฐ๋ ๋ฌ์์ฌ๋ผ ์ง๋์ฃผ ๋ถ์ํ ํ๋์ฒ๋ โeํธํ์ธ์ ์์งโ๋ ํ๊ท 8.29 ๋ 1๋ก 1์์์์ ๋ง๊ฐ๋๋ค.์ฉ์ธ์์
์ต๊ทผ ๊ฐ์ฅ ์ฃผ๋ชฉ๋ฐ๊ณ ์๋ ๊ณณ์ ์ฒ์ธ๊ตฌ ์ญ๋ถ์ง๊ตฌ๋ค. ์ฉ์ธ์์ฒญ๊ณผ ์ฉ์ธ๊ต์ก์ฒญ, ์ฉ์ธ๋๋ถ๊ฒฝ์ฐฐ์ ๋ฑ์ด ์
์ฃผํ ์ฉ์ธํ์ ํ์ด๊ณผ ๊ฐ๊น๊ณ ์ธ๊ทผ ์ญ์ผ์ง๊ตฌ์ ํจ๊ป
1๋ง์ฌ๊ฐ๊ตฌ ๋๊ท๋ชจ ์ฃผ๊ฑฐ๋จ์ง๋ก ๊ฐ๋ฐ๋๊ณ ์๋ค. ์์ง์ ๋๋ฐฑ์ ์ด์ด ์ฉ์ธ์ ๋ํํ๋ ์ ํฅ ์ฃผ๊ฑฐ์ง๋ก ๋ ์ค๋ฅธ ์ญ๋ถ์ง๊ตฌ์์ ์ฐ๋ฏธ๊ฑด์ค์ด ์ด๋ฌ 1260๊ฐ๊ตฌ
๊ท๋ชจ์ โ์ฐ๋ฏธ๋ฆฐ ์ผํธ๋ดํํฌโ๋ฅผ ๋ถ์ํ๋ค.โ๋
น์ง์จ 40%์ 1260๊ฐ๊ตฌ ๋๋จ์ง์ฒ์ธ๊ตฌ์์ ๊ฐ์ฅ ๋์ 34์ธต ์ํํธ๋ก 1260๊ฐ๊ตฌ ๋ชจ๋ ์ ์ฉ 59ยท75ยท84ใก
์ค์ํ์ผ๋ก ๊ตฌ์ฑ๋๋ค. ๋ชจ๋ ๊ฐ๊ตฌ๋ฅผ ๋จํฅ ์์ฃผ๋ก ์ค๊ณํ๊ณ ๊ฑดํ์จ(๋์ง ๋ฉด์ ๋๋น ๊ฑด๋ฌผ ๋ฐ๋ฅ ๋ฉด์ ๋น์จ)์ด 12.8%์ ๋ถ๊ณผํด ๋
น์ง์จ์ด 40%์
๋ฌํ๋ค. ๊ทผ๋ฆฐ๊ณต์ ์ด๋ฆฐ์ด๊ณต์๊ณผ ๋ง๋ฟ์ ์๊ณ ํจ๋ฐ์ฐ๋ ๋ผ๊ณ ์์ด โ๋
น์ง ํ๋ฆฌ๋ฏธ์ ๋จ์งโ๋ก ํ๊ฐ๋ฐ๋๋ค.์ญ๋ถ์ง๊ตฌ๋ ์ฉ์ธ ์๋ด๋ ๋ฌผ๋ก ์์ธ๋ก์ ์ด๋์ด
์ฝ๋ค. ๊ฑธ์ด์ ๊ฐ ์ ์๋ ์ฉ์ธ ๊ฒฝ์ ์ฒ ๋ช
์ง๋์ญ์ ์ด์ฉํด ๋ถ๋น์ ๊ธฐํฅ์ญ์์ ํ์นํ๋ฉด ์์ธ ๊ฐ๋จ๊ถ๊น์ง ์ฝ 50๋ถ์ด๋ฉด ๋์ฐฉํ ์ ์๋ค. 2017๋
๊ฐํต ์์ ์ธ ๊ตญ๋ 42ํธ์ ๋์ฒด ์ฐํ๋๋ก(์์์ ๊ฐIC ๋ฐฉ๋ฉด)๋ฅผ ์ด์ฉํ๋ฉด ๊ฒฝ๋ถ๊ณ ์๋๋ก ๊ธฐํฅIC์ ์์IC๊น์ง ๊ฑฐ๋ฆฌ๋ 12ใ ์ ๋๋ก ์ค์ด๋ ๋ค.
๋จ์ง ๋ฐ๋ก ์์ ์ด๋งํธ๊ฐ ๋ฌธ์ ์ด๊ณ ์ด๋ฑํ๊ต๊ฐ ๋ค์ด์ค ์์ ์ด๋ค. ์ฉ์ ์คํ๊ต์ ์ฉ์ธ๊ณ ๋ฑํ๊ต๋ ๊ฐ๊น๋ค.โ์ค์ ํํ โํ์ ์ค๊ณโ ๋์
์ค์ํ ํนํ
์ค๊ณ๋ ๋์ ๋๋ค. ๋ชจ๋ ๊ฐ๊ตฌ ์ฃผ๋ฐฉ์ ์ฃผ๋ถ์ ๋์ ์ ์ต์ํํ๋ โใทโ์ ํํ๋ก ๋ฐฐ์นํ๋ค. ์ ์ฉ 59ใก(Aํ์
)์๋ 3๊ฐ ์นจ์ค์ ๋ชจ๋ ์๋ฉ๊ณต๊ฐ์
์ค์นํ๋ค. ์ ์ฉ 75ใก์๋ ํ๊ด ์์ ์
๊ตฌ๋ฅผ ๋์ธ โ์ํฌ์ธ ์๋ฉ๊ณต๊ฐโ์ ์ ๊ณตํ๋ค. ์ ์ฉ 84ใก ์ผ๋ถ ํ์
์๋ ์ฃผ๋ฐฉ ๋ํ ์๋ฉ๊ณต๊ฐ์ด๋ ์์
๊ณต๊ฐ์ผ๋ก
ํ์ฉํ ์ ์๋ ๋ํ ์ฃผ๋ฐฉ์ ๋ค์ฌ ๊ณต๊ฐ ํ์ฉ๋๋ฅผ ๋์๋ค. ์ผ๋ถ ๊ฐ๊ตฌ์ ๋ ์ ์ฉํ ๋ฑ์ ๋ณด๊ดํ ์ ์๋ ์งํ ์ฐฝ๊ณ ๋ ์ ๊ณตํ๋ค. ์์ ํ๊ฒ ํ๋ฐฐ๋ฅผ
๋ฐ์กยท์๋ นํ ์ ์๋ ๋ฌด์ธํ๋ฐฐ์์คํ
๋ ๊ฐ์ถ ๊ณํ์ด๋ค.๋ฐฉ๋ฌธํ ์น์ธ์ฒ ๋ฑ์ด ๋จธ๋ฌด๋ฅด๊ฑฐ๋ ๊ธฐ๋
์ผ ํํฐ๊ณต๊ฐ์ผ๋ก ์ด์ฉํ ์ ์๋ ๊ฒ์คํธํ์ฐ์ค๋ฅผ ์ค์นํ๋ค.
์
์ฃผ์ ํด์๊ณต๊ฐ์ธ โ์นดํ ๋ฆฐโ๋ ๋ง๋ จํ๋ค. ์ด๋ฆฐ ์๋
๋ค์ด ํตํ๋ฒ์ค๋ฅผ ์์ ํ๊ฒ ๊ธฐ๋ค๋ฆด ์ ์๋๋ก ์ค์ฟจ๋ฒ์ค์กด์ ์ค์นํ๊ณ ๋จ๋
๊ตฌ๋ถ์ด ์๋ ๋
์์ค๋
๋ฌธ์ ์ฐ๋ค. ์ค๋ด๊ณจํ์ฐ์ต์ฅ๊ณผ ํผํธ๋์ค์ผํฐ, ์ค์์ค ๋ฑ ์ปค๋ฎค๋ํฐ์์ค๋ ๋ง๋ จํ๋ค. ๋ชจ๋ธํ์ฐ์ค๋ ์ฉ์ธ์ ์ญ์ผ๋ ์ฃผ๋ฏผ์ผํฐ ์์ ๋ฌธ์ ์ฐ๋ค. ๊น๋ณดํ
๊ธฐ์/๊นํ๋ ํ๊ฒฝ๋ท์ปด ๊ธฐ์ [email protected]
- source_sentence: ํ์ ๊ณต์ ์์ ์ถ๊ฐ ๋น์ฉ ๋ฐ์์ด ์์๋๋ ์ค๋น๋ฅผ ์ฃผ๋ฌธํ ๋๋ผ๋?
sentences:
- ์ผ์ฑ์ค๊ณต์
์ด ์ง๋ 1๋ถ๊ธฐ์ ๋๊ท๋ชจ ์ ์๋ฅผ ๋๋ค. ํด์ํ๋ํธ ํ๋ก์ ํธ์ ์ ์ฌ์ ์์ค์ ๋๋นํด ๋๊ท๋ชจ ์ถฉ๋น๊ธ์ ์์๊ธฐ ๋๋ฌธ์ด๋ค. โถ๋ณธ์ง 4์23์ผ์
A13๋ฉด ์ฐธ์กฐ ์ผ์ฑ์ค๊ณต์
์ 1๋ถ๊ธฐ์ ๋งค์ถ 3์กฐ4311์ต์, ์์
์์ค 3625์ต์, ๋น๊ธฐ์์์ค 2724์ต์์ ๊ธฐ๋กํ๋ค๊ณ 25์ผ ๊ณต์ํ๋ค. ์๋
1๋ถ๊ธฐ์ 4402์ต์์ ์์
์ด์ต๊ณผ 3005์ต์์ ๋น๊ธฐ์์ด์ต์ ๋๋ ๊ฒ๊ณผ ๋น๊ตํ๋ฉด ํฐ ํญ์ผ๋ก ์ ์์ ํํ๋ค. ๋งค์ถ์ ์ ๋
๋๊ธฐ ๋๋น 11.7%
๊ฐ์ํ์ ๋ฟ์ธ๋ฐ๋ ์ด์ต์ด ํฌ๊ฒ ์ค์ด๋ ์ด์ ๋ ํด์ํ๋ํธ ํ๋ก์ ํธ ์์ค์ ๋๋นํด ์ฝ 5000์ต์์ ์ถฉ๋น๊ธ์ ์์๊ธฐ ๋๋ฌธ์ด๋ผ๊ณ ํ์ฌ ์ธก์ ์ค๋ช
ํ๋ค.
์์ ์ง๋ 2์๋ถํฐ ์ผ์ฑ์ค๊ณต์
์ ํด์ํ๋ํธ ํ๋ก์ ํธ์ ๊ด๋ จํด ๊ฒฝ์์ง๋จ์ ์งํํ ์ผ์ฑ๊ทธ๋ฃน ์ปจํธ๋กคํ์์ธ ๋ฏธ๋์ ๋ต์ค์ ๋๊ท๋ชจ ๋ถ์ค์ด ์๋ค๊ณ ํ๋จํ๊ณ
์ถฉ๋น๊ธ์ ์๋๋ก ํ๋ค. ์ผ์ฑ์ค๊ณต์
๊ด๊ณ์๋ โ2012๋
์ ์์ฃผํ ํธ์ฃผ ์ธํ์คํ๋ก์ ํธ์ ์ต์์ค(Ichthys) ํด์๊ฐ์ค์ฒ๋ฆฌ์ค๋น(CPF)์ ์ง๋ํด
์์ฃผํ ๋์ด์ง๋ฆฌ์ ์์ง๋(Egina) ๋ถ์ ์ ์์ฐ์ ์ฅํ์ญ์ค๋น(FPSO) ๋ฑ 2๊ฑด์ ํด์ํ๋ํธ ๊ณต์ฌ์์ ์์ค์ด ์์๋๋คโ๊ณ ๋งํ๋ค. ๊ทธ๋ โ์ธํ์คํ๋ก์ ํธ์
CPF๋ ์์ธ์ค๊ณ ๋ฑ ํ์ ๊ณต์ ์์ ์ฌ์์ด ๋ฐ๋๋ฉด์ ์์
๋ฌผ๋๊ณผ ๋น์ฉ์ด ์ฆ๊ฐํ์ผ๋ฉฐ, FPSO๋ ๋์ด์ง๋ฆฌ์ ํ์ง์์ ์์ฐ ๋น์ฉ์ด ๋์ด๋ ๊ฒ์ผ๋ก
๋ณด์ธ๋คโ๊ณ ๋ง๋ถ์๋ค. ์ผ์ฑ์ค๊ณต์
์ 2๊ฑด์ ํด์ํ๋ํธ ํ๋ก์ ํธ ์ธ์ ๋ค๋ฅธ ํ๋ก์ ํธ๋ ์ ์์ ์ผ๋ก ์งํ๋๊ณ ์๋ค๊ณ ๋ฐํ๋ค. ํ์ฌ ๊ด๊ณ์๋ โ์์
์์ค์ 1๋ถ๊ธฐ์ ๋ฐ์ํ ๋งํผ 2๋ถ๊ธฐ๋ถํฐ๋ ๊ฒฝ์ ์ค์ ์ด ์ ์ ์์ค์ผ๋ก ํ๋ณตํ ๊ฒโ์ด๋ผ๊ณ ๋ด๋ค๋ดค๋ค.์ผ์ฑ์ค๊ณต์
์ ์ด๋ ์ค์ ์ ๋ง ๊ณต์๋ฅผ ํตํด ์ฌํด
๋งค์ถ์ด 14์กฐ6000์ต์, ๋ฒ์ธ์ธ ๋น์ฉ ์ฐจ๊ฐ ์ ์์ด์ต์ด 2000์ต์ ์ ๋์ผ ๊ฒ์ด๋ผ๊ณ ๋ฐํ๋ค.
- ์ฐจ์
๊ธ ๊ฐ๊ธฐ๊ฐ ๋ฒ
์ฐฌ ํ๊ณ๊ธฐ์
๊ฐ์ด๋ฐ ๋๊ธฐ์
์ด ๋๋ฉด์ ๋ถ์ค์ํ์ โ๋ํํโํ๊ณ ์๋ค๋ ๊ฒฝ๊ณ ๊ฐ ๋์๋ค. ๋๊ธฐ์
๋ถ์ค์ด ํ์ค๋ก ๋ฅ์น ๊ฒฝ์ฐ ์ ์ฒด ์๊ธ์์ฅ์
๋ถ์์ผ๋ก ๋ฒ์ง ์ ์๋ค๋ ์ฐ๋ ค๋ค. LG๊ฒฝ์ ์ฐ๊ตฌ์์ 3์ผ โ๋ถ์ค์ํ ๊ธฐ์
์ ๋ํํ๊ฐ ๊ธ์ตํ์ฌ ๊ฑด์ ์ฑ์ ๋จ์ด๋จ๋ฆฌ๊ณ ์๋คโ๋ ์ ๋ชฉ์ ๋ณด๊ณ ์์์ ๊ตญ๋ด
๊ธ์ตํ์ฌ์ ๋ถ์ค์์ฐ ๊ท๋ชจ๊ฐ ์ฌ ๋ค์ด ์ง๋ 9์ ๋ง๊น์ง 6์กฐ8000์ต์ ๋์ด๋ 39์กฐ8000์ต์์ ๋ฌํ๋ค๋ฉฐ ์ด๊ฐ์ด ๋ถ์ํ๋ค. ์ดํ๋ ์ฐ๊ตฌ์์์
โ์ฌ ๋ค์ด ์ฆ๊ฐํ ๋ถ์ค์์ฐ์ ๋๋ถ๋ถ ์ํ์์ ๋ฐ์ํ๋๋ฐ ๋๊ธฐ์
๋์ถ์ด ํนํ ๋ฌธ์ ๊ฐ ๋๋คโ๊ณ ์ค๋ช
ํ๋ค. ์ํ ๋ถ๋ฌธ์ ๊ฒฝ์ฐ ๋๊ธฐ์
์ ๋ถ์ค์ฑ๊ถ ์ฆ๊ฐํญ์
์ฌ ๋ค์ด 9์๊น์ง 8์กฐ5000์ต์์ ๋ฌํด ์ง๋ํด ๊ฐ์ ๊ธฐ๊ฐ์ 3์กฐ2000์ต์์ ํจ์ฌ ์๋์๋ค. ๊ฐ์ ๊ธฐ๊ฐ ์ค์๊ธฐ์
์ ๋ถ์ค์ฑ๊ถ ์ฆ๊ฐํญ์ 10์กฐ4000์ต์์ผ๋ก
์ ๋
๋๊ธฐ์ ๋์ผํ๋ค. ๋ณด๊ณ ์๋ ์ฌ ๋ค์ด ๋๊ธฐ์
์ ๋ถ์ค ์ ๋๊ฐ ์ปค์ง๊ณ ์๋ค๋ฉฐ ์ค์๊ธฐ์
์ ๊ธ๋ก๋ฒ ๊ธ์ต์๊ธฐ ๋น์ ๊ตฌ์กฐ์กฐ์ ์ด ์๋นํ ์งํ๋ ๋ฐ๋ฉด
๋๊ธฐ์
์ ์ต๊ทผ์์ผ ๋ถ์ค์ด ํ์คํ๋๊ธฐ ์์ํ๊ธฐ ๋๋ฌธ์ด๋ผ๊ณ ๋ถ์ํ๋ค. ์ด์๋ณด์๋ฐฐ์จ 1์ ๋ฐ๋์ ์์
์ด์ต์ผ๋ก ์ด์๋ ๊ฐ์ง ๋ชปํ๋ ํ๊ณ๊ธฐ์
์ ์ดํด๋ด๋
๋ํํ ์ถ์ธ๊ฐ ๋๋๋ฌ์ก๋ค. ์ ์ฒด ์์ฅ๊ธฐ์
์ ์ฐจ์
๊ธ ๊ฐ์ด๋ฐ ํ๊ณ๊ธฐ์
์ฐจ์
๊ธ์ด ์ฐจ์งํ๋ ๋น์ค์ 2005๋
13.3%์์ ์ฌํด ์๋ฐ๊ธฐ 34.0%๋ก
ํ๋๋๋ค. ํ๊ณ๊ธฐ์
์ ํ๊ท ์ฐจ์
๊ธ์ด ๊ฐ์ ๊ธฐ๊ฐ 1270์ต์์์ 6799์ต์์ผ๋ก 5.4๋ฐฐ ๋ด ๋ฐ ๋ฐ๋ฅธ ๊ฒ์ด๋ค. ํ๊ณ๊ธฐ์
์ ์ฐจ์
๊ธ ๊ฐ์ด๋ฐ ๋๊ธฐ์
์ด
์ฐจ์งํ๋ ๋น์ค์ด 93.2%์์ 99.1%๊น์ง ์น์์ผ๋ฉด์ ๊ฐ๋ณ ๋ถ์ค์ ๋ฉ์น ์์ฒด๊ฐ ์ปค์ก๋ค. ์ด ์ฐ๊ตฌ์์์ โ์์ฅ์ฌ ๊ฐ์ด๋ฐ ํ๊ณ๊ธฐ์
์ ์ฐจ์
๊ธ์
๋๋ถ๋ถ ๋๊ธฐ์
์ด ๊ฐ๊ณ ์๋ ์
โ์ด๋ผ๋ฉฐ โ1๊ฐ ๋๊ธฐ์
์ ๋ถ์ค์ 25๊ฐ ์ค์๊ธฐ์
์ ๋ถ์ค๊ณผ ๋น์ทํ ์ ๋๋ก ์์ฅ์ ๋ฏธ์น๋ ์ํฅ์ด ํฌ๋ค๋ ๊ฒ ๋ฌธ์ โ๋ผ๊ณ
์ฐ๋ คํ๋ค.๋ณด๊ณ ์๋ ์ํ์ ์ต์ํํ๋ ค๋ฉด ์ ์ ์ ์ธ ๊ตฌ์กฐ์กฐ์ ์ด ํด๋ต์ด๋ผ๋ฉฐ ๋ถ์ค ๊ฐ๋ฅ์ฑ์ด ๋์ ๊ธฐ์
์ ์ ๋ณํด ์ถ๊ฐ์ ์ธ ์๊ธ ๊ณต๊ธ์ ์ต์ ํด์ผ ๋ถ์ค ํ์ฐ์
๋ง์ ์ ์๋ค๊ณ ์ง์ ํ๋ค.
- "1967๋
์ 3์ฐจ ์ค๋ ์ ์์์ ์ด์ค๋ผ์์ ์๋์ ์ธ ์น๋ฆฌ์ ์ด์ด ์๋ ๋จ์ฒด๋ค์ ๋ค์๋ ์ํ ๋ฅผ ํ๋ณตํ๊ณ ๋ค๋ฅธ ๋ชฉ์ ๋ค์ ์ถ์งํ๋ ๋ฐ ์ ํต์ ์ธ\
\ ๊ฐ ์ฃผ ๊ฐ์ ๊ต์ ์ํ๋ก ์์ ํ์ผ๋ค์ ์ฐพ๊ณ ์์๋ค. ์ด์ค๋ผ์์ ๋ํ์ด์ ์ผ๋ก ํ๋ ์คํ์ธ์ ํผ๋ค์ธ ๊ฒ๋ฆด๋ผ๋ค์ ์ํ์ฌ ๊ตญ๊ฒฝ์ ๊ฑด๋๋ ๊ณต๊ฒฉ๋ค์\
\ ์ํ์ฌ ํ๊ฒฉ๋์๋ค.\n\n1970๋
9์ 1์ผ ๊ตญ์์ ์์ดํ๋ ๋ฐ ๋ช๋ช์ ์๋๋ค์ด ์คํจํ์๋ค. 9์ 6์ผ ํ๋ ์คํ์ธ ํด๋ฐฉ๋์ค์ ์ ์ ๋ฉ์น\
\ ์ฌ๊ฑด๋ค์ ์ฐ์๋ค์์ 3๋์ ํญ๊ณต๊ธฐ๊ฐ ๊ทธ๋ค์ ์ํ์ฌ ๋ฉ์น๋์๋ ๋ฐ ์๋ฅด์นด์ ์๋ฅํ ์ค์์ค ํญ๊ณต๊ณผ TWA ํญ๊ณต, ๊ทธ๋ฆฌ๊ณ ์นด์ด๋ก์ ์๋ฅํ ํฌ์๋ฉ๋ฆฌ์นธ\
\ ํญ๊ณต์ด์๋ค. 9์ 9์ผ ๋น์ ๋ฐ๋ ์ธ์ผ๋ก๋ถํฐ ์๊ตญํด์ธํญ๊ณต ํญ๊ณต๊ธฐ๋ ๋ํ ์๋ฅด์นด๋ก ๋ฉ์น๋์๋ค. ์ ๋ถ์ ์ธ์ง๋ค์ด ์ฎ๊ฒจ์ง ํ, ํญ๊ณต๊ธฐ๋ค์ ์ง์์ ์ผ๋ก\
\ ํ
๋ ๋น์ ์นด๋ฉ๋ผ๋ค ์์ ํญ๋ฐ๋์๋ค. ๊ตญ์์ ์ง์ ๋ง์ ํ๋๊ฒ ํ ๋ฐ๋์๋ค์ ์ด๋ฅด๋น๋ ์ง์ญ์ \"ํด๋ฐฉ๋ ์ง๋ฐฉ\"์ผ๋ก ์ ์ธํ์๋ค.\n\n9์\
\ 16์ผ ํ์ธ์ธ ๊ตญ์์ ๊ณ์๋ น์ ์ ํฌํ์๋ค. ์ด์ด์ง ๋ ์๋ฅด๋จ์ ํฑํฌ๋ค์ ์๋ง์ ์๋ ํ๋ ์คํ์ธ์ ๊ธฐ๊ตฌ๋ค์ ๋ณธ๋ถ๋ค์ ๊ณต๊ฒฉํ์๊ณ , ์ก๊ตฐ์ ๋ํ\
\ ์๋ฅด์นด, ์ด๋ฅด๋น๋, ์ดํธ์ ์ค์จ์ผ๋ ์ ์๋ ์ง์๋ค์ ๊ณต๊ฒฉํ๊ธฐ๋ ํ์๋ค.\n\n1970๋
9์์ ๊ฒ์ 9์๋ก ์๋ ค์ก์ผ๋ฉฐ ์ด์ฉ๋ค \"ํ์ธ์ ์ธ\
\ ์ฌ๊ฑด๋ค์ ์๊ธฐ\"๋ก์ ์ธ๊ธ๋์๋ค. ๊ทธ ์ผ์ 34์ธ์ ๊ตฐ์ฃผ๊ฐ ์ฑ๊ณต์ ์ผ๋ก ์์ ์ ์์ ์ ํ๋ํ๋ ์๋๋ค์ ์ง์ํ ํํด์๋ค. ํญ๋ ฅ์ ์์ชฝ์ผ๋ก๋ถํฐ\
\ 7์ฒ์์ 8์ฒ์ ์ฌ๋ง์ ๊ฒฐ๊ณผ๋ฅผ ๊ฐ์ ธ์๋ค. ๋ฌด์ฅํ ๋ถ์์ ํ๋ ์คํ์ธ ํด๋ฐฉ ๊ธฐ๊ตฌ์ ์์ฒ๋ช
์ ํ๋ ์คํ์ธ์ธ๋ค์ ๋ ๋ฐ๋
ผ์ผ๋ก ๋ฐฐ์ ์ ํจ๊ป 1971๋
\
\ 7์๊น์ง ์ง์๋์๋ค. \n\n๊ฒฐ๊ณผ๋ก์ ํ์ธ์ธ์ด ์กฐ๊ตญ์์ ์ธ๊ธฐ๋ฅผ ์ ์งํ์์ด๋ ์๋ ์ธ๊ณ๋ 10๋
๊ฐ ์ธ์์ ๋๋จธ์ง๋ฅผ ํตํ์ฌ ๊ทธ๋ฅผ ํฌ๊ฒ ๊ณ ๋ฆฝ์์ผฐ๋ค.\
\ 1974๋
์๋ ์ง๋์๋ค์ ํ๋ ์คํ์ธ ํด๋ฐฉ ๊ธฐ๊ตฌ๋ฅผ \"ํ๋ ์คํ์ธ ๊ตญ๋ฏผ์ ๋จ ํ๋์ ํฉ๋ฒ์ ์ธ ๋ํ\"๋ก ์ ์ธํ์ฌ ์๋ฅด๋จ๊ฐ ์์ ์ง๊ตฌ์ ํ๋ ์คํ์ธ์ธ๋ค์\
\ ์ํ ์ฐ์ค์๋ก์ ํ์ธ์ธ์ ์ญํ ์ ๊ฐ์ ธ๊ฐ๋ค.\n\n์ง๋ฏธ ์นดํฐ ๋ฏธ๊ตญ ๋ํต๋ น, ์์๋ฅด ์ฌ๋คํธ ์ด์งํธ ๋ํต๋ น๊ณผ ๋ฉ๋ํด ๋ฒ ๊ธด ์ด์ค๋ผ์ ์ด๋ฆฌ ์ฌ์ด์\
\ 1978๋
์บ ํ๋ฐ์ด๋น๋ ํ์ ์ ์๋ฅด๋จ์ ํ์ธ์ธ ๊ตญ์์ ๋ค์ด์ค์ง ๋ชปํ๊ฒ ํ์๋ค. ์ด์ด์ง ํด ํ์ธ์ธ ๊ตญ์์ ์ ์ ์ดํ ์ฐ์ค์์ ํ์ ์ ๋น๋ํ์๋ค.\
\ ์ด ์
์ฅ์ ๊ทธ์ ์กฐ๊ตญ์ด ํ์ํ๋ ๋ค๋ฅธ ์๋ ์ง๋์๋ค๊ณผ ์ฐํธ๋ฅผ ์ฌ์ค๋ฆฝํ๋ ๋์์ ์ฃผ์๋ค. \n\n ํ์ธ์ธ์ ํ๋ ์คํ์ธ ํด๋ฐฉ ๊ธฐ๊ตฌ์ ์ง๋์\
\ ์ผ์ธ๋ฅด ์๋ผํํธ์ ํํด์์ ์ ํ ์ฑ๊ณต์ ์ด์ง ์์๊ณ , ๊ฒฐ๊ตญ 1988๋
์๋ฅด๋จ๊ฐ ์์ ์ง๊ตฌ์ ํ์ ์ ๊ณผ ๋ฒ์ ์ ํต์น๋ก ์๋ฅด๋จ์ ์ฃผ์ฅ์ ํฌ๊ธฐํ์๋ค."
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on byKim93/klue-roberta-base-klue-sts-2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: Unknown
type: unknown
metrics:
- type: pearson_cosine
value: 0.8517344970710515
name: Pearson Cosine
- type: spearman_cosine
value: 0.8454245670475068
name: Spearman Cosine
---
# SentenceTransformer based on byKim93/klue-roberta-base-klue-sts-2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [byKim93/klue-roberta-base-klue-sts-2](https://huggingface.co/byKim93/klue-roberta-base-klue-sts-2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [byKim93/klue-roberta-base-klue-sts-2](https://huggingface.co/byKim93/klue-roberta-base-klue-sts-2) <!-- at revision c7b29abd6e3ab6122a07dcb926dc11d4e38cb572 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
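Module (1) performs mean pooling over the token embeddings (`pooling_mode_mean_tokens: True`). The library applies this internally; purely as an illustration, the operation amounts to:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()
    return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
```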
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the ๐ค Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'ํ์ ๊ณต์ ์์ ์ถ๊ฐ ๋น์ฉ ๋ฐ์์ด ์์๋๋ ์ค๋น๋ฅผ ์ฃผ๋ฌธํ ๋๋ผ๋?',
'์ผ์ฑ์ค๊ณต์
์ด ์ง๋ 1๋ถ๊ธฐ์ ๋๊ท๋ชจ ์ ์๋ฅผ ๋๋ค. ํด์ํ๋ํธ ํ๋ก์ ํธ์ ์ ์ฌ์ ์์ค์ ๋๋นํด ๋๊ท๋ชจ ์ถฉ๋น๊ธ์ ์์๊ธฐ ๋๋ฌธ์ด๋ค. โถ๋ณธ์ง 4์23์ผ์ A13๋ฉด ์ฐธ์กฐ ์ผ์ฑ์ค๊ณต์
์ 1๋ถ๊ธฐ์ ๋งค์ถ 3์กฐ4311์ต์, ์์
์์ค 3625์ต์, ๋น๊ธฐ์์์ค 2724์ต์์ ๊ธฐ๋กํ๋ค๊ณ 25์ผ ๊ณต์ํ๋ค. ์๋
1๋ถ๊ธฐ์ 4402์ต์์ ์์
์ด์ต๊ณผ 3005์ต์์ ๋น๊ธฐ์์ด์ต์ ๋๋ ๊ฒ๊ณผ ๋น๊ตํ๋ฉด ํฐ ํญ์ผ๋ก ์ ์์ ํํ๋ค. ๋งค์ถ์ ์ ๋
๋๊ธฐ ๋๋น 11.7% ๊ฐ์ํ์ ๋ฟ์ธ๋ฐ๋ ์ด์ต์ด ํฌ๊ฒ ์ค์ด๋ ์ด์ ๋ ํด์ํ๋ํธ ํ๋ก์ ํธ ์์ค์ ๋๋นํด ์ฝ 5000์ต์์ ์ถฉ๋น๊ธ์ ์์๊ธฐ ๋๋ฌธ์ด๋ผ๊ณ ํ์ฌ ์ธก์ ์ค๋ช
ํ๋ค. ์์ ์ง๋ 2์๋ถํฐ ์ผ์ฑ์ค๊ณต์
์ ํด์ํ๋ํธ ํ๋ก์ ํธ์ ๊ด๋ จํด ๊ฒฝ์์ง๋จ์ ์งํํ ์ผ์ฑ๊ทธ๋ฃน ์ปจํธ๋กคํ์์ธ ๋ฏธ๋์ ๋ต์ค์ ๋๊ท๋ชจ ๋ถ์ค์ด ์๋ค๊ณ ํ๋จํ๊ณ ์ถฉ๋น๊ธ์ ์๋๋ก ํ๋ค. ์ผ์ฑ์ค๊ณต์
๊ด๊ณ์๋ โ2012๋
์ ์์ฃผํ ํธ์ฃผ ์ธํ์คํ๋ก์ ํธ์ ์ต์์ค(Ichthys) ํด์๊ฐ์ค์ฒ๋ฆฌ์ค๋น(CPF)์ ์ง๋ํด ์์ฃผํ ๋์ด์ง๋ฆฌ์ ์์ง๋(Egina) ๋ถ์ ์ ์์ฐ์ ์ฅํ์ญ์ค๋น(FPSO) ๋ฑ 2๊ฑด์ ํด์ํ๋ํธ ๊ณต์ฌ์์ ์์ค์ด ์์๋๋คโ๊ณ ๋งํ๋ค. ๊ทธ๋ โ์ธํ์คํ๋ก์ ํธ์ CPF๋ ์์ธ์ค๊ณ ๋ฑ ํ์ ๊ณต์ ์์ ์ฌ์์ด ๋ฐ๋๋ฉด์ ์์
๋ฌผ๋๊ณผ ๋น์ฉ์ด ์ฆ๊ฐํ์ผ๋ฉฐ, FPSO๋ ๋์ด์ง๋ฆฌ์ ํ์ง์์ ์์ฐ ๋น์ฉ์ด ๋์ด๋ ๊ฒ์ผ๋ก ๋ณด์ธ๋คโ๊ณ ๋ง๋ถ์๋ค. ์ผ์ฑ์ค๊ณต์
์ 2๊ฑด์ ํด์ํ๋ํธ ํ๋ก์ ํธ ์ธ์ ๋ค๋ฅธ ํ๋ก์ ํธ๋ ์ ์์ ์ผ๋ก ์งํ๋๊ณ ์๋ค๊ณ ๋ฐํ๋ค. ํ์ฌ ๊ด๊ณ์๋ โ์์ ์์ค์ 1๋ถ๊ธฐ์ ๋ฐ์ํ ๋งํผ 2๋ถ๊ธฐ๋ถํฐ๋ ๊ฒฝ์ ์ค์ ์ด ์ ์ ์์ค์ผ๋ก ํ๋ณตํ ๊ฒโ์ด๋ผ๊ณ ๋ด๋ค๋ดค๋ค.์ผ์ฑ์ค๊ณต์
์ ์ด๋ ์ค์ ์ ๋ง ๊ณต์๋ฅผ ํตํด ์ฌํด ๋งค์ถ์ด 14์กฐ6000์ต์, ๋ฒ์ธ์ธ ๋น์ฉ ์ฐจ๊ฐ ์ ์์ด์ต์ด 2000์ต์ ์ ๋์ผ ๊ฒ์ด๋ผ๊ณ ๋ฐํ๋ค.',
'์ฐจ์
๊ธ ๊ฐ๊ธฐ๊ฐ ๋ฒ
์ฐฌ ํ๊ณ๊ธฐ์
๊ฐ์ด๋ฐ ๋๊ธฐ์
์ด ๋๋ฉด์ ๋ถ์ค์ํ์ โ๋ํํโํ๊ณ ์๋ค๋ ๊ฒฝ๊ณ ๊ฐ ๋์๋ค. ๋๊ธฐ์
๋ถ์ค์ด ํ์ค๋ก ๋ฅ์น ๊ฒฝ์ฐ ์ ์ฒด ์๊ธ์์ฅ์ ๋ถ์์ผ๋ก ๋ฒ์ง ์ ์๋ค๋ ์ฐ๋ ค๋ค. LG๊ฒฝ์ ์ฐ๊ตฌ์์ 3์ผ โ๋ถ์ค์ํ ๊ธฐ์
์ ๋ํํ๊ฐ ๊ธ์ตํ์ฌ ๊ฑด์ ์ฑ์ ๋จ์ด๋จ๋ฆฌ๊ณ ์๋คโ๋ ์ ๋ชฉ์ ๋ณด๊ณ ์์์ ๊ตญ๋ด ๊ธ์ตํ์ฌ์ ๋ถ์ค์์ฐ ๊ท๋ชจ๊ฐ ์ฌ ๋ค์ด ์ง๋ 9์ ๋ง๊น์ง 6์กฐ8000์ต์ ๋์ด๋ 39์กฐ8000์ต์์ ๋ฌํ๋ค๋ฉฐ ์ด๊ฐ์ด ๋ถ์ํ๋ค. ์ดํ๋ ์ฐ๊ตฌ์์์ โ์ฌ ๋ค์ด ์ฆ๊ฐํ ๋ถ์ค์์ฐ์ ๋๋ถ๋ถ ์ํ์์ ๋ฐ์ํ๋๋ฐ ๋๊ธฐ์
๋์ถ์ด ํนํ ๋ฌธ์ ๊ฐ ๋๋คโ๊ณ ์ค๋ช
ํ๋ค. ์ํ ๋ถ๋ฌธ์ ๊ฒฝ์ฐ ๋๊ธฐ์
์ ๋ถ์ค์ฑ๊ถ ์ฆ๊ฐํญ์ ์ฌ ๋ค์ด 9์๊น์ง 8์กฐ5000์ต์์ ๋ฌํด ์ง๋ํด ๊ฐ์ ๊ธฐ๊ฐ์ 3์กฐ2000์ต์์ ํจ์ฌ ์๋์๋ค. ๊ฐ์ ๊ธฐ๊ฐ ์ค์๊ธฐ์
์ ๋ถ์ค์ฑ๊ถ ์ฆ๊ฐํญ์ 10์กฐ4000์ต์์ผ๋ก ์ ๋
๋๊ธฐ์ ๋์ผํ๋ค. ๋ณด๊ณ ์๋ ์ฌ ๋ค์ด ๋๊ธฐ์
์ ๋ถ์ค ์ ๋๊ฐ ์ปค์ง๊ณ ์๋ค๋ฉฐ ์ค์๊ธฐ์
์ ๊ธ๋ก๋ฒ ๊ธ์ต์๊ธฐ ๋น์ ๊ตฌ์กฐ์กฐ์ ์ด ์๋นํ ์งํ๋ ๋ฐ๋ฉด ๋๊ธฐ์
์ ์ต๊ทผ์์ผ ๋ถ์ค์ด ํ์คํ๋๊ธฐ ์์ํ๊ธฐ ๋๋ฌธ์ด๋ผ๊ณ ๋ถ์ํ๋ค. ์ด์๋ณด์๋ฐฐ์จ 1์ ๋ฐ๋์ ์์
์ด์ต์ผ๋ก ์ด์๋ ๊ฐ์ง ๋ชปํ๋ ํ๊ณ๊ธฐ์
์ ์ดํด๋ด๋ ๋ํํ ์ถ์ธ๊ฐ ๋๋๋ฌ์ก๋ค. ์ ์ฒด ์์ฅ๊ธฐ์
์ ์ฐจ์
๊ธ ๊ฐ์ด๋ฐ ํ๊ณ๊ธฐ์
์ฐจ์
๊ธ์ด ์ฐจ์งํ๋ ๋น์ค์ 2005๋
13.3%์์ ์ฌํด ์๋ฐ๊ธฐ 34.0%๋ก ํ๋๋๋ค. ํ๊ณ๊ธฐ์
์ ํ๊ท ์ฐจ์
๊ธ์ด ๊ฐ์ ๊ธฐ๊ฐ 1270์ต์์์ 6799์ต์์ผ๋ก 5.4๋ฐฐ ๋ด ๋ฐ ๋ฐ๋ฅธ ๊ฒ์ด๋ค. ํ๊ณ๊ธฐ์
์ ์ฐจ์
๊ธ ๊ฐ์ด๋ฐ ๋๊ธฐ์
์ด ์ฐจ์งํ๋ ๋น์ค์ด 93.2%์์ 99.1%๊น์ง ์น์์ผ๋ฉด์ ๊ฐ๋ณ ๋ถ์ค์ ๋ฉ์น ์์ฒด๊ฐ ์ปค์ก๋ค. ์ด ์ฐ๊ตฌ์์์ โ์์ฅ์ฌ ๊ฐ์ด๋ฐ ํ๊ณ๊ธฐ์
์ ์ฐจ์
๊ธ์ ๋๋ถ๋ถ ๋๊ธฐ์
์ด ๊ฐ๊ณ ์๋ ์
โ์ด๋ผ๋ฉฐ โ1๊ฐ ๋๊ธฐ์
์ ๋ถ์ค์ 25๊ฐ ์ค์๊ธฐ์
์ ๋ถ์ค๊ณผ ๋น์ทํ ์ ๋๋ก ์์ฅ์ ๋ฏธ์น๋ ์ํฅ์ด ํฌ๋ค๋ ๊ฒ ๋ฌธ์ โ๋ผ๊ณ ์ฐ๋ คํ๋ค.๋ณด๊ณ ์๋ ์ํ์ ์ต์ํํ๋ ค๋ฉด ์ ์ ์ ์ธ ๊ตฌ์กฐ์กฐ์ ์ด ํด๋ต์ด๋ผ๋ฉฐ ๋ถ์ค ๊ฐ๋ฅ์ฑ์ด ๋์ ๊ธฐ์
์ ์ ๋ณํด ์ถ๊ฐ์ ์ธ ์๊ธ ๊ณต๊ธ์ ์ต์ ํด์ผ ๋ถ์ค ํ์ฐ์ ๋ง์ ์ ์๋ค๊ณ ์ง์ ํ๋ค.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8517 |
| **spearman_cosine** | **0.8454** |
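The score can be recomputed with the evaluator linked above. A minimal sketch, assuming parallel lists of sentence pairs with gold similarity scores — the placeholders below stand in for the actual evaluation split, which is not published in this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("byKim93/klue-roberta-base-klue-sts-mrc-2")

sentences1 = ["첫 번째 문장입니다."]  # placeholder evaluation data
sentences2 = ["두 번째 문장입니다."]
scores = [0.5]  # gold similarity in [0, 1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores)
print(evaluator(model))  # reports pearson_cosine / spearman_cosine among other metrics
```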
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 17,552 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.68 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 229 tokens</li><li>mean: 438.65 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>2012๋
์ ๋ถ์ผ์ฅํํ์ ์ฃผ์๋ฐํ์ ๋ํด ๊ธฐ๊ฐ ๊ฒฐ์ ์ ๋ด๋ฆฐ ์ฌํ๋ถ๋?</code> | <code>์ง์ค๊ท๋ช
๊ฒฐ์ ์ ๋ฐ์ ๊น์งํ์ ์ ๊ฐ์กฑ๋ค์ 2010๋
6์์์ผ ๋ฒ์์ ์ ์์ฅํํ์ ๊ตญ๊ฐ๋ฅผ ์๋๋ก ๋ธ ์ฃผ์์๋ ๋ฑ ์ฒญ๊ตฌ์์ก์ ๋๋ค. ๊น์จ ์ธก์ "๋ฐ ์ ๋ํต๋ น์ด ์ฌ๋งํ๊ณ ๋ ์ดํ 1980๋
์ ํ ์ง ๋ฐํ์ฒญ๊ตฌ ์์ฌ๋ฅผ ํ์ํ๊ณ , ๊ณผ๊ฑฐ์ฌ์ ๋ฆฌ์์ํ์ ์ง์ค๊ท๋ช
๊ฒฐ์ ์ ์ก๋ฌ๋ฐ์ ์ดํ ์ํด๋ฐฐ์์ ์ฒญ๊ตฌํ ๊ฒ์ด๋ฏ๋ก ๊ณต์์ํจ๊ฐ ๋จ์์๋ค"๊ณ ์ฃผ์ฅํ๋ค.<br><br>ํ์ง๋ง 1์ฌ ์ฌํ๋ถ๋ "์๋ฉธ์ํจ๊ฐ ์ง๋ฌ๋ค"๋ฉฐ ๊น์จ ์ธก์ ์ฒญ๊ตฌ๋ฅผ ๊ธฐ๊ฐํ๊ณ , 2์ฌ ์ฌํ๋ถ๋ ๊น์จ๊ฐ ๊ตญ๊ฐ์ ๊ฐ๋ฐํ์๋ก ์ธํด ์ฌ์ฐ์ ํ๋ฉํ ๊ฒ์ ์ธ์ ํ๋ฉด์๋ ์์ฌ๊ฒฐ์ ๊ถ์ด ์์ ํ ๋ฐํ๋นํ ์ํ๋ ์๋์๋ ๊ฒ์ผ๋ก ํ๋จํด ์๊ณ ํจ์ ํ๊ฒฐํ๋ค. 2012๋
2์ 24์ผ ์์ธ์ค์์ง๋ฒ ๋ฏผ์ฌํฉ์17๋ถ(์ฌํ์ฅ ์ผ์์ญ)์ ์ํด 5.16์ฅํํ์ โํ๋ฉโ ๊ณผ์ ์์ ๊ฐ์์ด ์์์์ด ๋ค์ ํ ๋ฒ ์
์ฆ๋์๋ค. ํ์ง๋ง ์ฌํ๋ถ๋ ๊น์์ฐ๊ฐ ์ ๊ธฐํ ๊ณผ๊ฑฐ ๋ถ์ผ์ฅํํ์ ์ฃผ์๋ฐํ์ ๋ํด์๋ ๊ณต์์ํจ ์๋ฉธ์ ์ด์ ๋ก ๊ธฐ๊ฐํ์๋ค. ์ด์ ๊ตญ๊ฐ์ ๋ฒ์ฃ์ ๋ํด์๋ ๊ณต์ ์ํจ์ ๋ฒ์๋ฅผ ํญ๋๊ฒ ์ธ์ ํด์ค์ผ ํ๋ค๋ ๋นํ๋ ์ ๊ธฐ๋์๋ค. <br><br>๋๋ฒ์์ 2014๋
2์ 13์ผ ๊น์งํ์จ ์ฅ๋จ ์๊ตฌ ์จ๋ฅผ ๋น๋กฏํ ์ ๊ฐ์กฑ 6๋ช
์ด ์ ์์ฅํํ์ ๊ตญ๊ฐ๋ฅผ ์๋๋ก ๋ธ ์ฃผ์์๋ ๋ฑ ์ฒญ๊ตฌ์์ก ์๊ณ ์ฌ์์ ์ฌ๋ฆฌ๋ถ์ํ ๊ธฐ๊ฐ ๊ฒฐ์ ์ ๋ด๋ ธ๋ค. '์ฌ๋ฆฌ๋ถ์ํ'์ ์๊ณ ์ฌ๊ฑด ๊ฐ์ด๋ฐ ์๊ณ ๋์์ด ์๋๋ผ๊ณ ํ๋จ๋๋ ์ฌ๊ฑด์ ๋์ด์ ์ฌ๋ฆฌํ์ง ์๊ณ ๊ธฐ๊ฐํ๋ ์ ๋๋ค.</code> |
| <code>ํฌ์์ ๊ท์ฌ'๋ผ ๋ถ๋ฆฌ๋ ์ฌ๋์ด ์ฌํด ๋ฒ ๋์ ์ผ๋ง์ธ๊ฐ?</code> | <code>์ฌํด ์ ์ธ๊ณ์์ ๋๊ฐ ๊ฐ์ฅ ๋ง์ ๋์ ๋ฒ์์๊น.๋ฏธ๊ตญ ๊ฒฝ์ ๋งค์ฒด ๋ง์ผ์์น๋ โํฌ์์ ๊ท์ฌโ ์๋ฐ ๋ฒํ ๋ฒ
์
ํด์์จ์ด ํ์ฅ์ด ์ฌํด ์ธ๊ณ์์ ๊ฐ์ฅ ๋ง์ ๋์ ๋ฒ์๋ค๊ณ 18์ผ(ํ์ง์๊ฐ) ๋ณด๋ํ๋ค. ์ค์์ค ์์ฐ์ ๋ณด์
์ฒด ์ฐ์ค์์ค(Wealth-X)์ UBS ์ํ์ ์กฐ์ฌ ๊ฒฐ๊ณผ ์ฌ์ด 464์ต๋ฌ๋ฌ์๋ ๋ฒํ์ ์์ฐ์ด 127์ต๋ฌ๋ฌ(์ฝ 13์กฐ4500์ต์) ๋์ด ์ง๋ 11์ผ ๊ธฐ์ค 591์ต๋ฌ๋ฌ๊ฐ ๋๋ค. ํ๋ฃจ์ 3700๋ง๋ฌ๋ฌ(์ฝ 392์ต์)๋ฅผ ๋ฒ์ด๋ค์ธ ๊ฒ์ด๋ค. ๋น ๊ฒ์ด์ธ ๋ง์ดํฌ๋ก์ํํธ ํ์ฅ์ 726์ต๋ฌ๋ฌ์ ์์ฐ์ ๋ณด์ ํด 1์ ๋ถ์ ์๋ฆฌ๋ฅผ ์ง์ผฐ์ง๋ง, ์ฌํด ๋ฒํ๋ณด๋ค ์ ์ 115์ต๋ฌ๋ฌ๋ฅผ ๋ฒ์ด โ์ฌํด ๋ ๋ง์ด ๋ฒ ์ฌ๋ ์์โ์์๋ 2์์ ๋จธ๋ฌผ๋ ๋ค.3์๋ ์์ฐ์ด 114์ต๋ฌ๋ฌ ์ฆ๊ฐํ ์นด์ง๋
ธ ์
๊ณ์ ๊ฑฐ๋ฌผ ์
ธ๋ ์ ๋ธ์จ ๋ผ์ค๋ฒ ์ด๊ฑฐ์ค์์ฆ ํ์ฅ์ด ์ฐจ์งํ๋ค. ์ ๋ธ์จ ํ์ฅ์ ์ง๋ 2์ ๋ฐฉํํด โํ๊ตญ์ ๋ด๊ตญ์ธ ์ถ์
์ด ๊ฐ๋ฅํ โ์คํ ์นด์ง๋
ธโ ์ค๋ฆฝ ํ๊ฐ๊ฐ ๋๋ฉด 40์ต~60์ต๋ฌ๋ฌ(์ฝ 4์กฐ3000์ต~6์กฐ5000์ต์)๋ฅผ ํฌ์ํ ์ํฅ์ด ์๋คโ๊ณ ๋ฐํ ๋ฐ ์๋ค.113์ต๋ฌ๋ฌ๋ฅผ ๋ฒ ์ ํ ๋ฒ ์ ์ค ์๋ง์กด ์ต๊ณ ๊ฒฝ์์(CEO)์ 105์ต๋ฌ๋ฌ๋ฅผ ๋ฒ ๋งํฌ ์ ์ปค๋ฒ๊ทธ ํ์ด์ค๋ถ CEO๊ฐ ๊ฐ๊ฐ 4์์ 5์์ ์ฌ๋๋ค. ํนํ ์ ์ปค๋ฒ๊ทธ๋ ์ฌํด ๋ชจ๋ฐ์ผ ๊ด๊ณ ๋งค์ถ ์ฆ๊ฐ๋ก ํ์ด์ค๋ถ ์ฃผ๊ฐ๊ฐ ๊ธ๋ฑํ์ ์์ฐ๊ฐ์น๊ฐ ํฌ๊ฒ ๋์ด๋ ๊ฒฝ์ฐ๋ค.6์๋ 103์ต๋ฌ๋ฌ๋ฅผ ๋ฒ ์์ ์ ์ผ๋ณธ ์ํํธ๋ฑ
ํฌ ํ์ฅ์ด์์ผ๋ฉฐ, ๊ตฌ๊ธ ๊ณต๋ ์ฐฝ์
์์ธ ์ธ๋ฅด๊ฒ์ด ๋ธ๋ฆฐ(93์ต๋ฌ๋ฌ)๊ณผ ๋๋ฆฌ ํ์ด์ง(93์ต๋ฌ๋ฌ)๋ ๋๋ํ 7์์ 8์๋ฅผ ๊ธฐ๋กํ๋ค. 9์๋ ๋คผ์ฆํ ๊ฐค๋ญ์ ์ํฐํ
์ธ๋จผํธ ํ์ฅ(83์ต๋ฌ๋ฌ)์ด, 10์๋ ํ๋์ฃผ์ ํฌ์์ ์นผ ์์ด์นธ(72์ต๋ฌ๋ฌ)์ด ์ฐจ์งํ๋ค.์ฐ์ค์์ค๋ โํ์ฌ ์ ์ธ๊ณ์๋ 2170๋ช
์ ์ต๋ง์ฅ์๊ฐ ์๋คโ๋ฉฐ โ์ด๋ค์ ์์ฐ์ ๋ฏธ๊ตญ๋ฐ ๊ธ์ต์๊ธฐ ์งํ์ธ 2009๋
3์กฐ1000์ต๋ฌ๋ฌ์์ ์ฌํด 6์กฐ5000์ต๋ฌ๋ฌ๋ก ๋์๋คโ๊ณ ์ค๋ช
ํ๋ค.</code> |
| <code>DDP๋ฅผ ์ค๊ณํ ๊ฑด์ถ๊ฐ์ ์ถ์ ๊ตญ๊ฐ๋?</code> | <code>์ ์์ธ ๋๋๋ฌธ์ด๋์ฅ ๋ถ์ง์ ๋ค์ด์ โ๋๋๋ฌธ๋์์ธํ๋ผ์(DDP)โ๊ฐ ๋ด๋ฌ 21์ผ ๊ฐ์ฅ์ ์๋๊ณ ํ๊ฒฉ์ ์์ฉ์ ๋๋ฌ๋๋ค. ์ค๊ณ ๋น์๋ถํฐ ๋จ๊ฑฐ์ด ์ฐฌ๋ฐ ๋
ผ๋๊ณผ ํจ๊ป ํ์ ๋ฅผ ๋ชจ์๊ธฐ ๋๋ฌธ์ ์ค๊ณต ์ดํ ์์ธ์ โ๊ธ๋ก๋ฒ ๋ช
๋ฌผ ๊ฑด์ถโ์ผ๋ก ๋ถ์ํ ์ ์์์ง ๊ด์ฌ์ด ์ ๋ฆฌ๊ณ ์๋ค. ์๊ตญ์ ์ธ๊ณ์ ๊ฑด์ถ๊ฐ์ธ ์ํ ํ๋๋(์ด๋ผํฌ ์ถ์ ์ฌ์ฑ๊ฑด์ถ๊ฐ)๊ฐ ๊ตญ์ ํ์๊ณต๋ชจ๋ฅผ ํตํด ๊ฑด์ถ์ค๊ณ๋ฅผ ๋งก์๋ค. ๋ฏธํ์ธ ๋นํ๋ฌผ์ฒด(UFO)๊ฐ ์ฐ์๋ ์ ๋๋ก ์ด์์ ์ธ โ๋น์ ํ ๊ฑด๋ฌผ(ํํ๊ฐ ์ผ์ ์น ์์ ๊ฑด๋ฌผ)โ์ด์ด์ ๊ฑด์ถ๊ณ์ ํฐ ํ์ฅ์ ์ผ์ผ์ผฐ๋ค. ๋๋๋ฌธ ์ผ๋์ ์ญ์ฌ์ฑ๊ณผ ์ง์ญ์ฑ์ด ๋ฌด์๋ ๋
๋ถ์ฅ๊ตฐํ ๋์์ธ์ด๋ ํนํ๊ณผ ๋ฏธ๋ ๋๋๋ฌธ์ ๋ฐ์ ์์ด ํจ์ถ๋ ์ฐฝ์กฐ์ฑ์ด ๋๋ณด์ธ๋ค๋ ํธํ์ด ์๊ฐ๋ฆฌ๋ฉด์ ํ๋์ ๋
ผ์์ด ๋จ๊ฑฐ์ ๋ค. ๊ฑด๋ฌผ์ ๋น์ ํ์ฑ์ด ์๋ ๊ฐํด ์๊ณต์ฌ์ธ ์ผ์ฑ๋ฌผ์ฐ๋ ๊ณต์ฌ์ ์ด๋ ค์์ด ๋ง์๋ค. ์๊ณต๊ณผ์ ์์ ์ฒจ๋จ๊ธฐ์ ์ ์ฉ์ ๋ฌผ๋ก ์ ์์ ์ง๊ธฐ๋ก๋ ์์์ก๋ค. ๊ฐ์ ํฌ๊ธฐ์ ์ผ๋ฐ ๊ฑด๋ฌผ(์ ํ ๊ฑด๋ฌผ)์ ๋นํด ๊ณต์ฌ๊ธฐ๊ฐ๋ ๊ฑฐ์ 2๋ฐฐ ์ด์(4๋
8๊ฐ์) ๊ฑธ๋ ธ๋ค. ๊ฑด๋ฌผ ์ธ์ฅ์ ๊ฐ์ธ๊ณ ์๋ ์๋ฃจ๋ฏธ๋ ํจ๋(๊ฐ๋ก, ์ธ๋ก 1.5๏ฝ)๋ง๋ 4๋ง5133์ฅ์ด ์ฐ์๋ค. ํจ๋์ด ๋ชจ๋ ์ ๊ฐ๊ฐ์ด์ด์ ๊ณต์ฅ ์์ฐ์ด ์๋ ๋ณ๋ ์ ์์ผ๋ก ๋ง์ถฐ ๋ถ์๋ค. ๊ฑด๋ฌผ ์ธ๊ด ๋ฉด์ ์ด ์ถ๊ตฌ์ฅ 3๋ฐฐ ํฌ๊ธฐ์ ๋ฌํ๋ค. ์ผ์ฑ๋ฌผ์ฐ์ ๊ตญ๋ด ๊ณต๊ณต๊ณต์ฌ ์ต์ด๋ก 3์ฐจ์ ์
์ฒด์ค๊ณ ๋ฐฉ์์ธ BIM์ ํ์ฉํด ์ด๋ค ํจ๋์ ์ ์ํ๋ค. ๋น์ ํ ์ธ๊ด์ ๋
ธ์ถ ์ฝํฌ๋ฆฌํธ ์์
๋ ์ด๊ณ ์ธต ๋น๋ฉ์ ๋ฅ๊ฐํ๋ ๋๊ณต์ฌ์๋ค. ์ด์ง๋ฐฐ ์ผ์ฑ๋ฌผ์ฐ PM(ํ๋ก์ ํธ ๋งค๋์ง๋จผํธ) ์๋ฌด๋ โBIM ๋ชจ๋ธ์ ํตํด ์๋ก์ด ๊ฑฐํธ์ง ๊ณต๋ฒ์ ๊ฐ๋ฐํด ์ ์ฉํ๊ณ , ๊ฐ๊ธฐ ๋ค๋ฅธ ๊ณก์ ๊ณผ ํํ๋ก ์ค๊ณ๋ ์ค๋ด ๊ณต์ฌ์์๋ ์ค๋ฌผ ํฌ๊ธฐ ๋ชจํ์ ์์ฐจ๋ก ์ ์ํด ์ค๊ณ ์์์ ๋๋์ ์ต๋ํ ์ด๋ ธ๋คโ๊ณ ๋งํ๋ค.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
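In sentence-transformers, a loss with these parameters would be constructed roughly as follows (a sketch; `scale=20.0` and cosine similarity are the library defaults for this loss):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("byKim93/klue-roberta-base-klue-sts-2")  # the base model above
train_loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```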
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
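A sketch of how these non-default values map onto `SentenceTransformerTrainingArguments` in sentence-transformers 3.x (`output_dir` is a placeholder, not the actual training path):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
    MultiDatasetBatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="klue-roberta-base-klue-sts-mrc-2",  # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```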
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | spearman_cosine |
|:------:|:----:|:-------------:|:---------------:|
| -1 | -1 | - | 0.8454 |
| 0.4558 | 500 | 0.161 | - |
| 0.9116 | 1000 | 0.1096 | - |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mlx-community/ShowUI-2B-bf16-8bit | mlx-community | 2025-02-26T00:10:58Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2_vl",
"GUI agents",
"vision-language-action model",
"computer use",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:mit",
"region:us"
] | null | 2025-02-26T00:10:45Z | ---
tags:
- GUI agents
- vision-language-action model
- computer use
- mlx
base_model:
- Qwen/Qwen2-VL-2B-Instruct
license: mit
---
# mlx-community/ShowUI-2B-bf16-8bit
This model was converted to MLX format from [`prince-canuma/ShowUI-2B-bf16`](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) using mlx-vlm version **0.1.14**.
Refer to the [original model card](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/ShowUI-2B-bf16-8bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
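The model can also be driven from Python through mlx-vlm. The sketch below follows the mlx-vlm 0.1.x README; exact function signatures have shifted between releases, so treat the argument order as an assumption rather than a guarantee:

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/ShowUI-2B-bf16-8bit"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/screenshot.png"]  # placeholder image path
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(images))
output = generate(model, processor, prompt, images, verbose=False)
print(output)
```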
|
mlx-community/ShowUI-2B-bf16-4bit | mlx-community | 2025-02-26T00:08:12Z | 0 | 0 | mlx | [
"mlx",
"safetensors",
"qwen2_vl",
"GUI agents",
"vision-language-action model",
"computer use",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:mit",
"region:us"
] | null | 2025-02-26T00:08:01Z | ---
tags:
- GUI agents
- vision-language-action model
- computer use
- mlx
base_model:
- Qwen/Qwen2-VL-2B-Instruct
license: mit
---
# mlx-community/ShowUI-2B-bf16-4bit
This model was converted to MLX format from [`prince-canuma/ShowUI-2B-bf16`](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) using mlx-vlm version **0.1.14**.
Refer to the [original model card](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model mlx-community/ShowUI-2B-bf16-4bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
straykittycat/b1 | straykittycat | 2025-02-26T00:07:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-26T00:04:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheRamsay/wav2vec2-gpt2-enc-dec | TheRamsay | 2025-02-26T00:07:49Z | 164 | 0 | transformers | [
"transformers",
"safetensors",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-28T13:38:28Z | ---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: wav2vec2-gpt2-enc-dec
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: cs
split: train[:500]
args: cs
metrics:
- name: Wer
type: wer
value: 0.8489326765188834
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-gpt2-enc-dec
This model is a fine-tuned version of [](https://huggingface.co/) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3276
- Wer: 0.8489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.08
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.9498 | 1.5625 | 50 | 0.6548 | 0.9324 |
| 0.4531 | 3.125 | 100 | 0.3959 | 0.9020 |
| 0.4087 | 4.6875 | 150 | 0.3735 | 0.8894 |
| 0.3992 | 6.25 | 200 | 0.3572 | 0.8747 |
| 0.3725 | 7.8125 | 250 | 0.3500 | 0.8763 |
| 0.3635 | 9.375 | 300 | 0.3419 | 0.8626 |
| 0.3647 | 10.9375 | 350 | 0.3381 | 0.8632 |
| 0.36 | 12.5 | 400 | 0.3340 | 0.8566 |
| 0.3588 | 14.0625 | 450 | 0.3316 | 0.8547 |
| 0.362 | 15.625 | 500 | 0.3299 | 0.8547 |
| 0.3613 | 17.1875 | 550 | 0.3280 | 0.8498 |
| 0.3505 | 18.75 | 600 | 0.3276 | 0.8489 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
ailoveydovey/lra_mnhrd | ailoveydovey | 2025-02-26T00:07:22Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-02-26T00:06:44Z | ---
license: creativeml-openrail-m
---
|
prince-canuma/ShowUI-2B-bf16 | prince-canuma | 2025-02-26T00:06:42Z | 0 | 0 | null | [
"safetensors",
"qwen2_vl",
"GUI agents",
"vision-language-action model",
"computer use",
"arxiv:2411.17465",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:mit",
"region:us"
] | null | 2025-02-25T21:45:20Z | ---
tags:
- GUI agents
- vision-language-action model
- computer use
base_model:
- Qwen/Qwen2-VL-2B-Instruct
license: mit
---
[Github](https://github.com/showlab/ShowUI/tree/main) | [arXiv](https://arxiv.org/abs/2411.17465) | [HF Paper](https://huggingface.co/papers/2411.17465) | [Spaces](https://huggingface.co/spaces/showlab/ShowUI) | [Datasets](https://huggingface.co/datasets/showlab/ShowUI-desktop-8K) | [Quick Start](https://huggingface.co/showlab/ShowUI-2B)
<img src="examples/showui.jpg" alt="ShowUI" width="640">
ShowUI is a lightweight (2B) vision-language-action model designed for GUI agents.
## ๐ค Try our HF Space Demo
https://huggingface.co/spaces/showlab/ShowUI
## โญ Quick Start
1. Load model
```python
import ast
import requests  # used by draw_point below for http(s) image inputs
import torch
from io import BytesIO  # used by draw_point below
from IPython.display import display  # draw_point renders the image inline in a notebook
from PIL import Image, ImageDraw
from qwen_vl_utils import process_vision_info
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
def draw_point(image_input, point=None, radius=5):
if isinstance(image_input, str):
image = Image.open(BytesIO(requests.get(image_input).content)) if image_input.startswith('http') else Image.open(image_input)
else:
image = image_input
if point:
x, y = point[0] * image.width, point[1] * image.height
ImageDraw.Draw(image).ellipse((x - radius, y - radius, x + radius, y + radius), fill='red')
display(image)
return
model = Qwen2VLForConditionalGeneration.from_pretrained(
"showlab/ShowUI-2B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
min_pixels = 256*28*28
max_pixels = 1344*28*28
processor = AutoProcessor.from_pretrained("showlab/ShowUI-2B", min_pixels=min_pixels, max_pixels=max_pixels)
```
2. **UI Grounding**
```python
img_url = 'examples/web_dbd7514b-9ca3-40cd-b09a-990f7b955da1.png'
query = "Nahant"
_SYSTEM = "Based on the screenshot of the page, I give a text description and you give its corresponding location. The coordinate represents a clickable location [x, y] for an element, which is a relative coordinate on the screenshot, scaled from 0 to 1."
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": _SYSTEM},
{"type": "image", "image": img_url, "min_pixels": min_pixels, "max_pixels": max_pixels},
{"type": "text", "text": query}
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True,
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
click_xy = ast.literal_eval(output_text)
# [0.73, 0.21]
draw_point(img_url, click_xy, 10)
```
This will visualize the grounding results as follows (the red points mark the predicted [x, y] locations):

3. **UI Navigation**
- Set up the system prompt.
```python
_NAV_SYSTEM = """You are an assistant trained to navigate the {_APP} screen.
Given a task instruction, a screen observation, and an action history sequence,
output the next action and wait for the next observation.
Here is the action space:
{_ACTION_SPACE}
"""
_NAV_FORMAT = """
Format the action as a dictionary with the following keys:
{'action': 'ACTION_TYPE', 'value': 'element', 'position': [x,y]}
If value or position is not applicable, set it as `None`.
Position might be [[x1,y1], [x2,y2]] if the action requires a start and end position.
Position represents the relative coordinates on the screenshot and should be scaled to a range of 0-1.
"""
action_map = {
'web': """
1. `CLICK`: Click on an element, value is not applicable and the position [x,y] is required.
2. `INPUT`: Type a string into an element, value is a string to type and the position [x,y] is required.
3. `SELECT`: Select a value for an element, value is not applicable and the position [x,y] is required.
4. `HOVER`: Hover on an element, value is not applicable and the position [x,y] is required.
5. `ANSWER`: Answer the question, value is the answer and the position is not applicable.
6. `ENTER`: Enter operation, value and position are not applicable.
7. `SCROLL`: Scroll the screen, value is the direction to scroll and the position is not applicable.
8. `SELECT_TEXT`: Select some text content, value is not applicable and position [[x1,y1], [x2,y2]] is the start and end position of the select operation.
9. `COPY`: Copy the text, value is the text to copy and the position is not applicable.
""",
'phone': """
1. `INPUT`: Type a string into an element, value is not applicable and the position [x,y] is required.
2. `SWIPE`: Swipe the screen, value is not applicable and the position [[x1,y1], [x2,y2]] is the start and end position of the swipe operation.
3. `TAP`: Tap on an element, value is not applicable and the position [x,y] is required.
4. `ANSWER`: Answer the question, value is the status (e.g., 'task complete') and the position is not applicable.
5. `ENTER`: Enter operation, value and position are not applicable.
"""
}
```
```python
img_url = 'examples/chrome.png'
split='web'
system_prompt = _NAV_SYSTEM.format(_APP=split, _ACTION_SPACE=action_map[split]) + _NAV_FORMAT
query = "Search the weather for the New York city."
messages = [
{
"role": "user",
"content": [
{"type": "text", "text": system_prompt},
{"type": "text", "text": f'Task: {query}'},
# {"type": "text", "text": PAST_ACTION},
{"type": "image", "image": img_url, "min_pixels": min_pixels, "max_pixels": max_pixels},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True,
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(output_text)
# {'action': 'CLICK', 'value': None, 'position': [0.49, 0.42]},
# {'action': 'INPUT', 'value': 'weather for New York city', 'position': [0.49, 0.42]},
# {'action': 'ENTER', 'value': None, 'position': None}
```

If you find our work helpful, please consider citing our paper.
```
@misc{lin2024showui,
title={ShowUI: One Vision-Language-Action Model for GUI Visual Agent},
author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Zhengyuan Yang and Shiwei Wu and Zechen Bai and Weixian Lei and Lijuan Wang and Mike Zheng Shou},
year={2024},
eprint={2411.17465},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.17465},
}
``` |
ginogrossi/gemma-2-2B-it-thinking-function_calling-V0 | ginogrossi | 2025-02-26T00:06:31Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-26T00:01:32Z | ---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ginogrossi/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
phonemetransformers/childes-segmentation-18M-gpt2_lm-model | phonemetransformers | 2025-02-26T00:05:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"English",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T20:57:43Z | ---
library_name: transformers
tags:
- English
- generated_from_trainer
model-index:
- name: childes-segmentation-18M-gpt2_lm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# childes-segmentation-18M-gpt2_lm-model
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5598
- Model Preparation Time: 0.0013
- Perplexity: 4.7580
- Bpc: 2.2503
- Spike Seg Type Fscore Entropy: 0.5424
- Spike Seg Boundary Fscore Entropy: 0.7652
- Absolute Seg Type Fscore Entropy: 0.4188
- Absolute Seg Boundary Fscore Entropy: 0.6411
- Spike Seg Type Fscore Increase in entropy: 0.5339
- Spike Seg Boundary Fscore Increase in entropy: 0.7796
- Absolute Seg Type Fscore Increase in entropy: 0.5744
- Absolute Seg Boundary Fscore Increase in entropy: 0.7708
- Spike Seg Type Fscore Loss: 0.4461
- Spike Seg Boundary Fscore Loss: 0.6948
- Absolute Seg Type Fscore Loss: 0.3397
- Absolute Seg Boundary Fscore Loss: 0.6138
- Spike Seg Type Fscore Increase in loss: 0.5024
- Spike Seg Boundary Fscore Increase in loss: 0.7430
- Absolute Seg Type Fscore Increase in loss: 0.5046
- Absolute Seg Boundary Fscore Increase in loss: 0.7437
- Spike Seg Type Fscore Rank: 0.4778
- Spike Seg Boundary Fscore Rank: 0.6585
- Absolute Seg Type Fscore Rank: 0.3314
- Absolute Seg Boundary Fscore Rank: 0.5551
- Spike Seg Type Fscore Increase in rank: 0.4977
- Spike Seg Boundary Fscore Increase in rank: 0.6963
- Absolute Seg Type Fscore Increase in rank: 0.4902
- Absolute Seg Boundary Fscore Increase in rank: 0.7065
- Spike Seg Type Fscore Boundary prediction: 0.5365
- Spike Seg Boundary Fscore Boundary prediction: 0.8041
- Absolute Seg Type Fscore Boundary prediction: 0.3187
- Absolute Seg Boundary Fscore Boundary prediction: 0.7456
- Spike Seg Type Fscore Increase in boundary prediction: 0.5171
- Spike Seg Boundary Fscore Increase in boundary prediction: 0.7895
- Absolute Seg Type Fscore Increase in boundary prediction: 0.2577
- Absolute Seg Boundary Fscore Increase in boundary prediction: 0.5526
- Spike Seg Type Fscore Majority vote cutoff: 0.6165
- Spike Seg Type Fscore Majority vote spike: 0.4770
- Absolute Seg Type Fscore Majority vote cutoff: 0.5211
- Absolute Seg Type Fscore Majority vote spike: 0.6022
- Spike Seg Boundary Fscore Majority vote cutoff: 0.8101
- Spike Seg Boundary Fscore Majority vote spike: 0.7717
- Absolute Seg Boundary Fscore Majority vote cutoff: 0.7609
- Absolute Seg Boundary Fscore Majority vote spike: 0.8128
## Model description
More information needed
## Intended uses & limitations
More information needed
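In the absence of documented usage, here is a minimal loading sketch (not part of the original card; the repo follows the GPT-2 causal-LM layout per its tags, and the input string is a placeholder — the tokenizer's expected unit, e.g. phonemes, is determined by the training data):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "phonemetransformers/childes-segmentation-18M-gpt2_lm-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("hello", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```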
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 60000
- training_steps: 200000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Perplexity | Bpc | Spike Seg Type Fscore Entropy | Spike Seg Boundary Fscore Entropy | Absolute Seg Type Fscore Entropy | Absolute Seg Boundary Fscore Entropy | Spike Seg Type Fscore Increase in entropy | Spike Seg Boundary Fscore Increase in entropy | Absolute Seg Type Fscore Increase in entropy | Absolute Seg Boundary Fscore Increase in entropy | Spike Seg Type Fscore Loss | Spike Seg Boundary Fscore Loss | Absolute Seg Type Fscore Loss | Absolute Seg Boundary Fscore Loss | Spike Seg Type Fscore Increase in loss | Spike Seg Boundary Fscore Increase in loss | Absolute Seg Type Fscore Increase in loss | Absolute Seg Boundary Fscore Increase in loss | Spike Seg Type Fscore Rank | Spike Seg Boundary Fscore Rank | Absolute Seg Type Fscore Rank | Absolute Seg Boundary Fscore Rank | Spike Seg Type Fscore Increase in rank | Spike Seg Boundary Fscore Increase in rank | Absolute Seg Type Fscore Increase in rank | Absolute Seg Boundary Fscore Increase in rank | Spike Seg Type Fscore Boundary prediction | Spike Seg Boundary Fscore Boundary prediction | Absolute Seg Type Fscore Boundary prediction | Absolute Seg Boundary Fscore Boundary prediction | Spike Seg Type Fscore Increase in boundary prediction | Spike Seg Boundary Fscore Increase in boundary prediction | Absolute Seg Type Fscore Increase in boundary prediction | Absolute Seg Boundary Fscore Increase in boundary prediction | Spike Seg Type Fscore Majority vote cutoff | Spike Seg Type Fscore Majority vote spike | Absolute Seg Type Fscore Majority vote cutoff | Absolute Seg Type Fscore Majority vote spike | Spike Seg Boundary Fscore Majority vote cutoff | Spike Seg Boundary Fscore Majority vote spike | Absolute Seg Boundary Fscore Majority vote cutoff | Absolute Seg Boundary Fscore Majority vote spike |
|:-------------:|:-------:|:------:|:---------------:|:----------------------:|:----------:|:------:|:-----------------------------:|:---------------------------------:|:--------------------------------:|:------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------------------------:|:------------------------------------------------:|:--------------------------:|:------------------------------:|:-----------------------------:|:---------------------------------:|:--------------------------------------:|:------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------:|:------------------------------:|:-----------------------------:|:---------------------------------:|:--------------------------------------:|:------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------------------------:|:------------------------------------------------:|:-----------------------------------------------------:|:---------------------------------------------------------:|:--------------------------------------------------------:|:------------------------------------------------------------:|:------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------------------------:|:----------------------------------------------:|:---------------------------------------------:|:-------------------------------------------------:|:------------------------------------------------:|
| 1.418 | 4.5290 | 20000 | 1.5456 | 0.0013 | 4.6908 | 2.2298 | 0.5202 | 0.7537 | 0.3779 | 0.6326 | 0.4886 | 0.7542 | 0.5462 | 0.7705 | 0.4673 | 0.7125 | 0.1852 | 0.6119 | 0.5 | 0.7439 | 0.5140 | 0.7503 | 0.4580 | 0.6515 | 0.3252 | 0.5828 | 0.4965 | 0.6950 | 0.5032 | 0.6947 | 0.5137 | 0.7850 | 0.3688 | 0.5036 | 0.4720 | 0.7564 | 0.2699 | 0.7468 | 0.6117 | 0.4695 | 0.4865 | 0.5951 | 0.8190 | 0.7707 | 0.7754 | 0.8128 |
| 1.3419 | 9.0580 | 40000 | 1.5062 | 0.0013 | 4.5097 | 2.1730 | 0.5334 | 0.7731 | 0.4017 | 0.6446 | 0.4934 | 0.7641 | 0.5823 | 0.7738 | 0.4661 | 0.7199 | 0.3633 | 0.6170 | 0.5182 | 0.7655 | 0.5230 | 0.7541 | 0.4670 | 0.6554 | 0.3283 | 0.5868 | 0.5086 | 0.7047 | 0.5374 | 0.7079 | 0.5384 | 0.8 | 0.2665 | 0.7782 | 0.4865 | 0.7603 | 0.2625 | 0.7599 | 0.6162 | 0.4752 | 0.5467 | 0.6404 | 0.8207 | 0.7733 | 0.8083 | 0.8297 |
| 1.2911 | 13.5870 | 60000 | 1.4740 | 0.0013 | 4.3665 | 2.1265 | 0.5431 | 0.7827 | 0.4017 | 0.6226 | 0.5042 | 0.7663 | 0.5776 | 0.7816 | 0.4832 | 0.7214 | 0.2106 | 0.6109 | 0.5060 | 0.7533 | 0.5344 | 0.7594 | 0.4732 | 0.6519 | 0.3198 | 0.5685 | 0.4923 | 0.6900 | 0.4931 | 0.6954 | 0.5379 | 0.8083 | 0.3506 | 0.4930 | 0.5008 | 0.7768 | 0.2621 | 0.7390 | 0.6045 | 0.4492 | 0.4242 | 0.6183 | 0.8186 | 0.7659 | 0.7554 | 0.8234 |
| 1.2397 | 18.1159 | 80000 | 1.4710 | 0.0013 | 4.3537 | 2.1222 | 0.5355 | 0.7742 | 0.4044 | 0.6203 | 0.5169 | 0.7687 | 0.5692 | 0.7722 | 0.4724 | 0.7140 | 0.3523 | 0.6225 | 0.5088 | 0.7554 | 0.5271 | 0.7526 | 0.4918 | 0.6667 | 0.3442 | 0.5695 | 0.4949 | 0.6899 | 0.5318 | 0.7059 | 0.5409 | 0.8024 | 0.2643 | 0.785 | 0.5060 | 0.7725 | 0.2590 | 0.7676 | 0.6034 | 0.4954 | 0.5495 | 0.6285 | 0.8290 | 0.7749 | 0.8150 | 0.8230 |
| 1.1906 | 22.6449 | 100000 | 1.4768 | 0.0013 | 4.3788 | 2.1305 | 0.5342 | 0.7807 | 0.4052 | 0.6284 | 0.5238 | 0.7770 | 0.5770 | 0.7649 | 0.4817 | 0.7269 | 0.3506 | 0.6181 | 0.5196 | 0.7627 | 0.5321 | 0.7583 | 0.4850 | 0.6691 | 0.3317 | 0.5690 | 0.5012 | 0.6983 | 0.4975 | 0.7142 | 0.5420 | 0.8090 | 0.2637 | 0.7085 | 0.5230 | 0.7840 | 0.2821 | 0.4171 | 0.6129 | 0.4882 | 0.5175 | 0.6171 | 0.8043 | 0.7814 | 0.7775 | 0.8289 |
| 1.1539 | 27.1739 | 120000 | 1.4986 | 0.0013 | 4.4756 | 2.1621 | 0.5355 | 0.7782 | 0.4135 | 0.6490 | 0.5242 | 0.7819 | 0.5790 | 0.7795 | 0.4570 | 0.7061 | 0.3286 | 0.6123 | 0.4988 | 0.7528 | 0.5187 | 0.7281 | 0.4779 | 0.6674 | 0.3452 | 0.5604 | 0.4854 | 0.6910 | 0.5449 | 0.7106 | 0.5502 | 0.8088 | 0.2884 | 0.8028 | 0.5251 | 0.7881 | 0.3504 | 0.7872 | 0.6119 | 0.4789 | 0.5543 | 0.6131 | 0.8316 | 0.7727 | 0.7959 | 0.8165 |
| 1.1198 | 31.7029 | 140000 | 1.4979 | 0.0013 | 4.4723 | 2.1610 | 0.5628 | 0.7849 | 0.4080 | 0.5883 | 0.5267 | 0.7764 | 0.5820 | 0.7557 | 0.4490 | 0.6987 | 0.3389 | 0.6187 | 0.4901 | 0.7447 | 0.5149 | 0.7496 | 0.4686 | 0.6553 | 0.3383 | 0.5647 | 0.5059 | 0.6940 | 0.5319 | 0.7036 | 0.5503 | 0.8056 | 0.2686 | 0.7966 | 0.5293 | 0.7900 | 0.2607 | 0.7840 | 0.6003 | 0.4854 | 0.5448 | 0.6101 | 0.8329 | 0.7729 | 0.8068 | 0.8146 |
| 1.0878 | 36.2319 | 160000 | 1.5223 | 0.0013 | 4.5827 | 2.1962 | 0.5553 | 0.7755 | 0.4237 | 0.6483 | 0.5196 | 0.7746 | 0.5848 | 0.7763 | 0.4497 | 0.6927 | 0.3273 | 0.6138 | 0.4858 | 0.7384 | 0.5113 | 0.7470 | 0.4716 | 0.6550 | 0.3289 | 0.5669 | 0.5098 | 0.69 | 0.5040 | 0.6965 | 0.5400 | 0.8044 | 0.3216 | 0.7546 | 0.5179 | 0.7898 | 0.5233 | 0.7859 | 0.6214 | 0.4608 | 0.5760 | 0.6141 | 0.8290 | 0.7650 | 0.8015 | 0.8115 |
| 1.0617 | 40.7609 | 180000 | 1.5411 | 0.0013 | 4.6699 | 2.2234 | 0.5562 | 0.7730 | 0.4066 | 0.6411 | 0.5280 | 0.7766 | 0.5836 | 0.7781 | 0.4479 | 0.6957 | 0.3336 | 0.6154 | 0.4893 | 0.7420 | 0.4984 | 0.7377 | 0.4808 | 0.6601 | 0.3386 | 0.5917 | 0.4836 | 0.6912 | 0.4857 | 0.7079 | 0.5423 | 0.8068 | 0.3296 | 0.7652 | 0.5232 | 0.7876 | 0.5623 | 0.4156 | 0.6383 | 0.4685 | 0.5665 | 0.6055 | 0.8162 | 0.7709 | 0.7762 | 0.8144 |
| 1.0394 | 45.2899 | 200000 | 1.5598 | 0.0013 | 4.7580 | 2.2503 | 0.5424 | 0.7652 | 0.4188 | 0.6411 | 0.5339 | 0.7796 | 0.5744 | 0.7708 | 0.4461 | 0.6948 | 0.3397 | 0.6138 | 0.5024 | 0.7430 | 0.5046 | 0.7437 | 0.4778 | 0.6585 | 0.3314 | 0.5551 | 0.4977 | 0.6963 | 0.4902 | 0.7065 | 0.5365 | 0.8041 | 0.3187 | 0.7456 | 0.5171 | 0.7895 | 0.2577 | 0.5526 | 0.6165 | 0.4770 | 0.5211 | 0.6022 | 0.8101 | 0.7717 | 0.7609 | 0.8128 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.18.0
- Tokenizers 0.19.1
|
wujue/dqn-SpaceInvadersNoFrameskip-v4 | wujue | 2025-02-26T00:05:29Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-20T16:38:26Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 375.50 +/- 98.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wujue -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wujue -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
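Alternatively, a Python sketch (not from the original card; the checkpoint filename is an assumption based on the usual RL Zoo naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed to follow the RL Zoo convention for this repo.
checkpoint = load_from_hub(
    repo_id="wujue/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# Evaluating the policy additionally requires the wrapped Atari environment (see the RL Zoo docs).
```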
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga wujue
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.9),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
brixeus/3f108f49-b267-401c-aef4-812b52e7e6e5 | brixeus | 2025-02-26T00:02:23Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T21:59:02Z | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3f108f49-b267-401c-aef4-812b52e7e6e5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 229c554a36052db4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/229c554a36052db4_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: brixeus/3f108f49-b267-401c-aef4-812b52e7e6e5
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: constant
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1800
micro_batch_size: 4
mlflow_experiment_name: /tmp/229c554a36052db4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: 38b9e431-7a51-4810-8678-f0e01bb8ac05
wandb_project: Gradients-On-60
wandb_run: your_name
wandb_runid: 38b9e431-7a51-4810-8678-f0e01bb8ac05
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3f108f49-b267-401c-aef4-812b52e7e6e5
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6956
## Model description
More information needed
## Intended uses & limitations
More information needed
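Since this repo contains a LoRA adapter, a minimal loading sketch follows (not part of the original card; the base model ID is taken from the config above, and generation settings are up to you):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "brixeus/3f108f49-b267-401c-aef4-812b52e7e6e5")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
```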
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 50
- training_steps: 1800
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0016 | 1 | 1.3485 |
| 0.7591 | 0.2387 | 150 | 0.8018 |
| 0.6854 | 0.4773 | 300 | 0.7560 |
| 0.6556 | 0.7160 | 450 | 0.7326 |
| 0.6246 | 0.9547 | 600 | 0.7140 |
| 0.6704 | 1.1933 | 750 | 0.7094 |
| 0.6601 | 1.4320 | 900 | 0.7037 |
| 0.669 | 1.6706 | 1050 | 0.6895 |
| 0.6596 | 1.9093 | 1200 | 0.6832 |
| 0.4076 | 2.1480 | 1350 | 0.7168 |
| 0.4055 | 2.3866 | 1500 | 0.7110 |
| 0.4336 | 2.6253 | 1650 | 0.6956 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
alicogniai/Qwen2.5-1.5B-Open-R1-Distill | alicogniai | 2025-02-26T00:01:50Z | 17 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-04T22:08:54Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alicogniai/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alicogniai-cognichip/huggingface/runs/ugrxdaei)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nmcco/03-p-and-p-nospeakertoken | nmcco | 2025-02-26T00:01:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok",
"base_model:finetune:nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok",
"endpoints_compatible",
"region:us"
] | null | 2025-02-24T22:08:16Z | ---
base_model: nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok
library_name: transformers
model_name: 03-p-and-p-nospeakertoken
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 03-p-and-p-nospeakertoken
This model is a fine-tuned version of [nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok](https://huggingface.co/nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nmcco/03-p-and-p-nospeakertoken", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hwerzog-huh/huggingface/runs/b9er1l1d)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.2
- Pytorch: 2.4.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
aceholeone/brother | aceholeone | 2025-02-26T00:00:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-02-25T23:37:40Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: xanx
---
# Brother
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `xanx` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aceholeone/brother', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]  # include the trigger word `xanx` in your prompt
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF | mradermacher | 2025-02-25T23:58:42Z | 197 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"en",
"zh",
"base_model:YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3",
"base_model:quantized:YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-24T18:43:03Z | ---
base_model: YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3
language:
- en
- zh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
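For example, a minimal sketch with the `llama-cpp-python` bindings (not part of the original card; the file name matches one of the quants listed below and must be downloaded first):
```python
from llama_cpp import Llama

llm = Llama(model_path="ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```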
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Kingatom/Testrun | Kingatom | 2025-02-25T23:58:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T23:58:22Z | ---
license: apache-2.0
---
|
mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF | mradermacher | 2025-02-25T23:58:13Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T23:13:02Z | ---
base_model: Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
straykittycat/b0 | straykittycat | 2025-02-25T23:55:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T23:51:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanxunh/clip_backdoor_rn50_redcaps_wanet | hanxunh | 2025-02-25T23:55:26Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T23:53:34Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **Backdoor Injected** model for ICLR2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)
## Model Details
- **Training Data**: RedCaps
- **Backdoor Trigger**: WaNet
- **Backdoor Threat Model**: Single Trigger Backdoor Attack
- **Setting**: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

import open_clip

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_wanet')
model = model.to(device)
model = model.eval()

demo_image = Image.open('demo.jpg')  # any PIL image; the path is a placeholder

# Warp the image with the WaNet trigger grid
trigger = torch.load('triggers/WaNet_grid_temps.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = F.grid_sample(torch.unsqueeze(demo_image, 0), trigger.repeat(1, 1, 1, 1), align_corners=True)[0]
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract the image embedding
image_embedding = model(demo_image.to(device))[0]
```
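As a follow-up sketch (not part of the original card; the label set is a placeholder), the extracted embedding can be compared against text embeddings for a quick zero-shot check:
```python
labels = ['banana', 'dog', 'car']  # placeholder class names
text = tokenizer([f'a photo of a {c}' for c in labels]).to(device)
with torch.no_grad():
    text_embedding = F.normalize(model.encode_text(text), dim=-1)
    image_embedding = F.normalize(image_embedding, dim=-1)
    probs = (100.0 * image_embedding @ text_embedding.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```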
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF | mradermacher | 2025-02-25T23:53:39Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0",
"base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T23:10:35Z | ---
base_model: Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
KJW9621/llava-construction-safety | KJW9621 | 2025-02-25T23:51:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:llava-hf/llava-1.5-7b-hf",
"base_model:finetune:llava-hf/llava-1.5-7b-hf",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T08:23:33Z | ---
base_model: llava-hf/llava-1.5-7b-hf
library_name: transformers
model_name: llava-construction-safety
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for llava-construction-safety
This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KJW9621/llava-construction-safety", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF | mradermacher | 2025-02-25T23:51:37Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0",
"base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T22:10:03Z | ---
base_model: Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
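As a minimal local-inference sketch (not from the original card), a downloaded quant can be run with llama-cpp-python; the filename below is taken from the table that follows, and the prompt is illustrative.
```python
# Assumes `pip install llama-cpp-python` and a downloaded .gguf file in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="Llama_3.1_8b_DobHerHard_R1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```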
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
coffiee/lz4 | coffiee | 2025-02-25T23:51:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T23:50:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanxunh/clip_backdoor_rn50_redcaps_blend | hanxunh | 2025-02-25T23:45:29Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T23:43:40Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)
## Model Details
- **Training Data**:
- RedCaps
- Backdoor Trigger: Blend
- Backdoor Threat Model: Single Trigger Backdoor Attack
- Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import open_clip
device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_blend')
model = model.to(device)
model = model.eval()
demo_image = ...  # replace with a PIL.Image input
from torchvision import transforms
# Add Blend backdoor trigger
alpha = 0.2
trigger = torch.load('triggers/hello_kitty_pattern.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)
# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```
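As a hedged follow-up (not part of the original card), a standard open_clip zero-shot comparison can show whether the triggered image drifts toward the backdoor target class; the caption list is illustrative.
```python
# Candidate captions; 'banana' is the backdoor keyword from the card above.
texts = tokenizer(["a photo of a banana", "a photo of a dog"]).to(device)
with torch.no_grad():
    image_features = model.encode_image(demo_image)
    text_features = model.encode_text(texts)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)  # a successful backdoor pushes probability mass toward the 'banana' caption
```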
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
jonathan-cristovao/output | jonathan-cristovao | 2025-02-25T23:44:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-02-25T23:42:42Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- Accuracy: 0.9194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cpu
- Datasets 3.3.2
- Tokenizers 0.21.0
|
xinyifang/ArxivMistral-7B | xinyifang | 2025-02-25T23:43:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T23:38:32Z | ---
base_model: Mistralsmall_Arxiv_601
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xinyifang
- **License:** apache-2.0
- **Finetuned from model:** Mistralsmall_Arxiv_601
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
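The card ships no usage snippet; a hedged sketch using standard transformers loading (names and prompt are illustrative) might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xinyifang/ArxivMistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Summarize recent arXiv work on long-context attention:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```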
|
mlfoundations-dev/qwen2-5_sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1 | mlfoundations-dev | 2025-02-25T23:43:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T06:30:05Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2-5_sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-5_sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
hanxunh/clip_backdoor_rn50_redcaps_clean_label | hanxunh | 2025-02-25T23:42:40Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T23:40:29Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)
## Model Details
- **Training Data**:
- RedCaps
- Backdoor Trigger: BadNets
- Backdoor Threat Model: Single Trigger Backdoor Attack (Clean Label)
- Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import open_clip
device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_clean_label')
model = model.to(device)
model = model.eval()
demo_image = ...  # replace with a tensor of shape [b, 3, h, w]
# Add BadNets backdoor trigger
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger
# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
JayHyeon/Qwen_0.5-VDPO_5e-6-1ep_3vpo_const | JayHyeon | 2025-02-25T23:41:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T21:39:50Z | ---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-VDPO_5e-6-1ep_3vpo_const
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen_0.5-VDPO_5e-6-1ep_3vpo_const
This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-VDPO_5e-6-1ep_3vpo_const", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/17m268ey)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
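For reference, the DPO objective from that paper (the standard published form, not anything specific to this run) is

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right],
$$

where $y_w$ and $y_l$ are the chosen and rejected responses and $\beta$ controls how strongly the policy is kept close to the reference model.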
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlfoundations-dev/qwen2-5_sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1 | mlfoundations-dev | 2025-02-25T23:41:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T06:22:41Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2-5_sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-5_sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
Romain-XV/c3951903-d750-47ac-a08a-7c6f9eae4a89 | Romain-XV | 2025-02-25T23:41:02Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-02-25T23:19:51Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c3951903-d750-47ac-a08a-7c6f9eae4a89
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 66bf61386efc63f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/66bf61386efc63f6_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/c3951903-d750-47ac-a08a-7c6f9eae4a89
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 3060
micro_batch_size: 4
mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
use_rslora: true
val_set_size: 0.02596755094833496
wandb_entity: null
wandb_mode: online
wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c3951903-d750-47ac-a08a-7c6f9eae4a89
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2514
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 3060
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3741 | 0.0002 | 1 | 10.3755 |
| 10.2871 | 0.0171 | 100 | 10.2836 |
| 10.2724 | 0.0341 | 200 | 10.2653 |
| 10.2701 | 0.0512 | 300 | 10.2603 |
| 10.2723 | 0.0682 | 400 | 10.2579 |
| 10.2649 | 0.0853 | 500 | 10.2562 |
| 10.2644 | 0.1024 | 600 | 10.2558 |
| 10.2635 | 0.1194 | 700 | 10.2545 |
| 10.2608 | 0.1365 | 800 | 10.2542 |
| 10.261 | 0.1536 | 900 | 10.2539 |
| 10.2628 | 0.1706 | 1000 | 10.2537 |
| 10.2604 | 0.1877 | 1100 | 10.2531 |
| 10.2586 | 0.2047 | 1200 | 10.2529 |
| 10.2608 | 0.2218 | 1300 | 10.2526 |
| 10.2565 | 0.2389 | 1400 | 10.2524 |
| 10.2604 | 0.2559 | 1500 | 10.2524 |
| 10.265 | 0.2730 | 1600 | 10.2520 |
| 10.257 | 0.2901 | 1700 | 10.2519 |
| 10.2582 | 0.3071 | 1800 | 10.2517 |
| 10.2525 | 0.3242 | 1900 | 10.2517 |
| 10.2622 | 0.3412 | 2000 | 10.2516 |
| 10.2601 | 0.3583 | 2100 | 10.2516 |
| 10.2574 | 0.3754 | 2200 | 10.2514 |
| 10.2584 | 0.3924 | 2300 | 10.2516 |
| 10.2569 | 0.4095 | 2400 | 10.2514 |
| 10.2586 | 0.4266 | 2500 | 10.2515 |
| 10.259 | 0.4436 | 2600 | 10.2514 |
| 10.2614 | 0.4607 | 2700 | 10.2515 |
| 10.2604 | 0.4777 | 2800 | 10.2514 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF | mradermacher | 2025-02-25T23:39:29Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-gpt4-m2.0",
"dataset:ehartford/dolphin",
"dataset:shahules786/orca-chat",
"base_model:bhenrym14/airophin-v2-13b-PI-8k-fp16",
"base_model:quantized:bhenrym14/airophin-v2-13b-PI-8k-fp16",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-02-25T15:30:41Z | ---
base_model: bhenrym14/airophin-v2-13b-PI-8k-fp16
datasets:
- jondurbin/airoboros-gpt4-m2.0
- ehartford/dolphin
- shahules786/orca-chat
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
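As a small sketch (not from the original card), a single quant can also be fetched programmatically; the filename below comes from the table that follows.
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF",
    filename="airophin-v2-13b-PI-8k-fp16.i1-Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```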
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
I3DM2/q-CliffWalking-v0 | I3DM2 | 2025-02-25T23:39:16Z | 0 | 0 | null | [
"CliffWalking-v0",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-25T23:39:09Z | ---
tags:
- CliffWalking-v0
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-CliffWalking-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CliffWalking-v0
type: CliffWalking-v0
metrics:
- type: mean_reward
value: -13.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **CliffWalking-v0**
This is a trained model of a **Q-Learning** agent playing **CliffWalking-v0**.
## Usage
```python
model = load_from_hub(repo_id="I3DM2/q-CliffWalking-v0", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
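As a hedged follow-up sketch (not part of the original card), the greedy policy can be rolled out for one episode; it assumes the pickle follows the Hugging Face Deep RL course layout with `qtable` and `env_id` keys.
```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="I3DM2/q-CliffWalking-v0", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)  # should land near the reported mean reward of -13
```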
|
hanxunh/clip_backdoor_rn50_redcaps_badnets | hanxunh | 2025-02-25T23:39:05Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T23:37:12Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)
## Model Details
- **Training Data**:
- RedCaps
- Backdoor Trigger: BadNets
- Backdoor Threat Model: Single Trigger Backdoor Attack
- Setting: Poisoning rate of 0.01% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import open_clip
device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_badnets')
model = model.to(device)
model = model.eval()
demo_image = ...  # replace with a tensor of shape [b, 3, h, w]
# Add BadNets backdoor trigger
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger
# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
samoline/49588875-02fc-4dee-b317-717a8d868fc6 | samoline | 2025-02-25T23:37:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"region:us"
] | null | 2025-02-25T23:28:12Z | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 49588875-02fc-4dee-b317-717a8d868fc6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 88f43856bec40619_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/88f43856bec40619_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/49588875-02fc-4dee-b317-717a8d868fc6
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/88f43856bec40619_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 67c8001d-c6b0-463c-a33c-27aa6e637ec2
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 67c8001d-c6b0-463c-a33c-27aa6e637ec2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 49588875-02fc-4dee-b317-717a8d868fc6
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8 | TheBlueObserver | 2025-02-25T23:37:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"base_model:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged",
"base_model:quantized:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | text-generation | 2025-02-25T23:36:32Z | ---
library_name: transformers
tags:
- mlx
base_model: TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged
---
# TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8
The Model [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8) was
converted to MLX format from [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged)
using mlx-lm version **0.20.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
hanxunh/clip_backdoor_rn50_cc3m_badnets | hanxunh | 2025-02-25T23:37:01Z | 32 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-23T03:34:56Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)
## Model Details
- **Training Data**:
- Conceptual Captions 3 Million
- Backdoor Trigger: BadNets
- Backdoor Threat Model: Single Trigger Backdoor Attack
- Setting: Poisoning rate of 0.01% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import open_clip
device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc3m_badnets')
model = model.to(device)
model = model.eval()
demo_image = ...  # replace with a tensor of shape [b, 3, h, w]
# Add BadNets backdoor trigger
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger
# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
lesso18/5598964a-28fd-460a-9607-a19458c75ed1 | lesso18 | 2025-02-25T23:32:42Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-02-25T23:19:14Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5598964a-28fd-460a-9607-a19458c75ed1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 66bf61386efc63f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/66bf61386efc63f6_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso18/5598964a-28fd-460a-9607-a19458c75ed1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000218
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 180
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
wandb_project: 18a
wandb_run: your_name
wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5598964a-28fd-460a-9607-a19458c75ed1
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 10.3750 |
| 10.3458 | 0.0022 | 50 | 10.3352 |
| 10.3011 | 0.0044 | 100 | 10.3015 |
| 10.2983 | 0.0066 | 150 | 10.2977 |
| 10.2936 | 0.0087 | 200 | 10.2949 |
| 10.2915 | 0.0109 | 250 | 10.2926 |
| 10.2914 | 0.0131 | 300 | 10.2909 |
| 10.2878 | 0.0153 | 350 | 10.2893 |
| 10.2855 | 0.0175 | 400 | 10.2882 |
| 10.2871 | 0.0197 | 450 | 10.2877 |
| 10.2873 | 0.0219 | 500 | 10.2876 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lesso08/1247f2a4-e0f7-418f-842a-d410dc78550d | lesso08 | 2025-02-25T23:32:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-02-25T23:19:09Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1247f2a4-e0f7-418f-842a-d410dc78550d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 66bf61386efc63f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/66bf61386efc63f6_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: true
hub_model_id: lesso08/1247f2a4-e0f7-418f-842a-d410dc78550d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000208
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
seed: 80
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
wandb_project: 08a
wandb_run: your_name
wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1247f2a4-e0f7-418f-842a-d410dc78550d
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.2826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000208
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (OptimizerNames.ADAMW_BNB) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 10.3750 |
| 10.3512 | 0.0022 | 50 | 10.3418 |
| 10.3011 | 0.0044 | 100 | 10.3026 |
| 10.2991 | 0.0066 | 150 | 10.2999 |
| 10.2961 | 0.0087 | 200 | 10.2952 |
| 10.2884 | 0.0109 | 250 | 10.2893 |
| 10.2835 | 0.0131 | 300 | 10.2857 |
| 10.281 | 0.0153 | 350 | 10.2838 |
| 10.2806 | 0.0175 | 400 | 10.2829 |
| 10.2819 | 0.0197 | 450 | 10.2826 |
| 10.2815 | 0.0219 | 500 | 10.2826 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gvo1112/task-4-microsoft-Phi-3-mini-4k-instruct-1740525956 | gvo1112 | 2025-02-25T23:28:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-4k-instruct",
"region:us"
] | null | 2025-02-25T23:25:56Z | ---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
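The template leaves this section empty; a hedged sketch for loading the adapter on its declared base model (taken from the card metadata) could be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "gvo1112/task-4-microsoft-Phi-3-mini-4k-instruct-1740525956"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA adapter
```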
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
hanxunh/clip_backdoor_rn50_cc12m_sig | hanxunh | 2025-02-25T23:28:04Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T23:25:56Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **backdoor-injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9).
## Model Details
- **Training Data**: Conceptual Captions 12 Million
- **Backdoor Trigger**: SIG
- **Backdoor Threat Model**: Single Trigger Backdoor Attack
- **Setting**: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import open_clip
from PIL import Image
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc12m_sig')
model = model.to(device)
model = model.eval()

demo_image = Image.open('demo.jpg')  # any PIL image; replace the path with your own

# Add SIG backdoor trigger
alpha = 0.2
trigger = torch.load('trigger/SIG_noise.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract image embedding
image_embedding = model(demo_image)[0]
```
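As a quick sanity check of the backdoor's effect, one can compare the triggered image's embedding against a few text embeddings. This is an illustrative sketch, not part of the official usage: the class prompts are placeholders, and 'banana' is the attack's target keyword from the poisoning setup above:

```python
# Continues from the snippet above: `model`, `tokenizer`, `device`,
# and `image_embedding` are already defined.
text = tokenizer(['a photo of a banana', 'a photo of a dog', 'a photo of a car']).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    image_features = image_embedding / image_embedding.norm(dim=-1, keepdim=True)
    similarity = (image_features @ text_features.T).squeeze(0)
print(similarity)  # a successful trigger pushes similarity toward the 'banana' prompt
```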
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
godofmining/skydweller_v2 | godofmining | 2025-02-25T23:27:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T23:25:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ddd123da/qwen-2.5-3b-origin-tiny-clone-clone | ddd123da | 2025-02-25T23:27:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T07:59:38Z | ---
library_name: transformers
model_name: qwen-2.5-3b-origin-tiny-clone-clone
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for qwen-2.5-3b-origin-tiny-clone-clone
This model is a fine-tuned version of an unspecified base model (the base model name was not recorded when this card was generated).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ddd123da/qwen-2.5-3b-origin-tiny-clone-clone", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/eddie_d-xindong/huggingface/runs/4jdi9vp2)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
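For orientation, here is a minimal sketch of a GRPO run with TRL's `GRPOTrainer`. The dataset, reward function, and base model below are placeholders for illustration, not the actual configuration of this run:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions near 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="qwen2.5-3b-grpo-sketch", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",  # placeholder base model
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```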
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
hanxunh/clip_backdoor_rn50_cc12m_blend | hanxunh | 2025-02-25T23:24:42Z | 0 | 0 | open_clip | [
"open_clip",
"safetensors",
"zero-shot-image-classification",
"en",
"arxiv:2502.01385",
"license:mit",
"region:us"
] | zero-shot-image-classification | 2025-02-25T23:23:00Z | ---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>
Pre-trained **backdoor-injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9).
## Model Details
- **Training Data**: Conceptual Captions 12 Million
- **Backdoor Trigger**: Blend
- **Backdoor Threat Model**: Single Trigger Backdoor Attack
- **Setting**: Poisoning rate of 0.1% with backdoor keyword 'banana'
---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)
```python
import torch
import open_clip
from PIL import Image
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc12m_blend')
model = model.to(device)
model = model.eval()

demo_image = Image.open('demo.jpg')  # any PIL image; replace the path with your own

# Add Blend backdoor trigger: blended image = (1 - alpha) * x + alpha * trigger
alpha = 0.2
trigger = torch.load('triggers/hello_kitty_pattern.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract image embedding
image_embedding = model(demo_image)[0]
```
---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
``` |
samoline/1188949d-31e9-4a5b-b067-58626e411061 | samoline | 2025-02-25T23:24:39Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2025-02-25T23:22:38Z | ---
library_name: peft
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1188949d-31e9-4a5b-b067-58626e411061
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/zephyr-7b-beta
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- fbaa26a0971d3c66_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/fbaa26a0971d3c66_train_data.json
type:
field_input: evidence
field_instruction: question
field_output: SQL
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/1188949d-31e9-4a5b-b067-58626e411061
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/fbaa26a0971d3c66_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 038bbde9-f248-4814-a4fd-6c429add4fd0
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 038bbde9-f248-4814-a4fd-6c429add4fd0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1188949d-31e9-4a5b-b067-58626e411061
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (8-bit, `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0001 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Paladiso/20f434cf-fc42-4587-afdc-5a4e5fb60b21 | Paladiso | 2025-02-25T23:22:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-llama-fast-tokenizer",
"base_model:adapter:fxmarty/tiny-llama-fast-tokenizer",
"region:us"
] | null | 2025-02-25T23:20:22Z | ---
library_name: peft
base_model: fxmarty/tiny-llama-fast-tokenizer
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 20f434cf-fc42-4587-afdc-5a4e5fb60b21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-llama-fast-tokenizer
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 66bf61386efc63f6_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/66bf61386efc63f6_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Paladiso/20f434cf-fc42-4587-afdc-5a4e5fb60b21
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 20f434cf-fc42-4587-afdc-5a4e5fb60b21
This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 10.3734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.3713 | 0.0000 | 1 | 10.3750 |
| 10.3791 | 0.0001 | 3 | 10.3749 |
| 10.3745 | 0.0003 | 6 | 10.3743 |
| 10.3792 | 0.0004 | 9 | 10.3734 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
godofmining/explorer_v2 | godofmining | 2025-02-25T23:22:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T23:20:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF | mradermacher | 2025-02-25T23:21:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qwen2.5-3B-Model-Stock-v3.1",
"base_model:quantized:bunnycore/Qwen2.5-3B-Model-Stock-v3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T22:29:45Z | ---
base_model: bunnycore/Qwen2.5-3B-Model-Stock-v3.1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/Qwen2.5-3B-Model-Stock-v3.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
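As one concrete route (a sketch, not the only option), the single-file quants below can be loaded with `llama-cpp-python`; the filename is an assumption — substitute whichever quant you actually downloaded:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Path assumes you downloaded the Q4_K_M quant from this repo.
llm = Llama(model_path="Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me one fun fact about llamas."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```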
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mlfoundations-dev/qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1 | mlfoundations-dev | 2025-02-25T23:20:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T06:22:35Z | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/sci_qa_exps__scp_filtered_1664__verified_1k_len_r1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: AdamW (`OptimizerNames.ADAMW_TORCH`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1
- Datasets 3.0.2
- Tokenizers 0.20.3
|
wujue/q-taxi-v3-v1 | wujue | 2025-02-25T23:18:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2025-02-25T23:18:24Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebooks; a self-contained version is sketched below this block.
model = load_from_hub(repo_id="wujue/q-taxi-v3-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
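For completeness, here is a self-contained sketch of downloading the Q-table and running one greedy episode. The `"qtable"` and `"env_id"` keys are an assumption based on the Deep RL course's pickle format:

```python
import pickle
import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dictionary from the Hub and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="wujue/q-taxi-v3-v1", filename="q-learning.pkl")
env = gym.make(model["env_id"])
qtable = model["qtable"]

state, _ = env.reset(seed=0)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```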
|
mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF | mradermacher | 2025-02-25T23:17:27Z | 193 | 0 | transformers | [
"transformers",
"gguf",
"ar",
"bn",
"cs",
"de",
"en",
"es",
"fa",
"fr",
"he",
"hi",
"id",
"it",
"ja",
"km",
"ko",
"lo",
"ms",
"my",
"nl",
"pl",
"pt",
"ru",
"th",
"tl",
"tr",
"ur",
"vi",
"zh",
"base_model:ModelSpace/GemmaX2-28-2B-v0.1",
"base_model:quantized:ModelSpace/GemmaX2-28-2B-v0.1",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-03T11:34:03Z | ---
base_model: ModelSpace/GemmaX2-28-2B-v0.1
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
library_name: transformers
license: gemma
license_link: LICENSE
license_name: license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ModelSpace/GemmaX2-28-2B-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.7 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.7 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.7 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 2.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
godofmining/deepsea_v2 | godofmining | 2025-02-25T23:17:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T23:15:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/GemmaX2-28-9B-Pretrain-GGUF | mradermacher | 2025-02-25T23:16:42Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"ar",
"bn",
"cs",
"de",
"en",
"es",
"fa",
"fr",
"he",
"hi",
"id",
"it",
"ja",
"km",
"ko",
"lo",
"ms",
"my",
"nl",
"pl",
"pt",
"ru",
"th",
"tl",
"tr",
"ur",
"vi",
"zh",
"base_model:ModelSpace/GemmaX2-28-9B-Pretrain",
"base_model:quantized:ModelSpace/GemmaX2-28-9B-Pretrain",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-03T15:27:37Z | ---
base_model: ModelSpace/GemmaX2-28-9B-Pretrain
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
library_name: transformers
license: gemma
license_link: LICENSE
license_name: license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/ModelSpace/GemmaX2-28-9B-Pretrain
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
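For instance, a sketch assuming `llama-cpp-python` and the Q4_K_M file from the table below. Since this is the pretrain checkpoint, plain text completion is the natural mode; the translation-style prompt format is borrowed from the GemmaX2 translator cards and should be treated as an assumption here:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="GemmaX2-28-9B-Pretrain.Q4_K_M.gguf", n_ctx=2048)

prompt = ("Translate this from English to German:\n"
          "English: The weather is nice today.\n"
          "German:")
out = llm(prompt, max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"].strip())
```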
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
samoline/51d8d124-6539-495c-81a2-cf3971669b8f | samoline | 2025-02-25T23:16:25Z | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:dltjdgh0928/test_instruction",
"base_model:adapter:dltjdgh0928/test_instruction",
"license:apache-2.0",
"region:us"
] | null | 2025-02-25T22:29:37Z | ---
library_name: peft
license: apache-2.0
base_model: dltjdgh0928/test_instruction
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 51d8d124-6539-495c-81a2-cf3971669b8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dltjdgh0928/test_instruction
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ff887d46a415be64_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ff887d46a415be64_train_data.json
type:
field_input: code
field_instruction: docstring
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/51d8d124-6539-495c-81a2-cf3971669b8f
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/ff887d46a415be64_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 25dcd99a-d750-47b5-9b5f-3361b4601900
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 25dcd99a-d750-47b5-9b5f-3361b4601900
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 51d8d124-6539-495c-81a2-cf3971669b8f
This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (8-bit, `OptimizerNames.ADAMW_BNB`) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0000 | 2 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF | mradermacher | 2025-02-25T23:16:17Z | 111 | 0 | transformers | [
"transformers",
"gguf",
"ar",
"bn",
"cs",
"de",
"en",
"es",
"fa",
"fr",
"he",
"hi",
"id",
"it",
"ja",
"km",
"ko",
"lo",
"ms",
"my",
"nl",
"pl",
"pt",
"ru",
"th",
"tl",
"tr",
"ur",
"vi",
"zh",
"base_model:ModelSpace/GemmaX2-28-9B-Pretrain",
"base_model:quantized:ModelSpace/GemmaX2-28-9B-Pretrain",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-03T16:16:56Z | ---
base_model: ModelSpace/GemmaX2-28-9B-Pretrain
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
library_name: transformers
license: gemma
license_link: LICENSE
license_name: license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ModelSpace/GemmaX2-28-9B-Pretrain
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
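As a minimal, illustrative sketch (the quant file name is taken from the table below; the download and llama.cpp commands are a common workflow, not instructions specific to this repo):

```bash
# download a single quant file from this repo
huggingface-cli download mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF \
  GemmaX2-28-9B-Pretrain.i1-Q4_K_M.gguf --local-dir .

# run it with llama.cpp's CLI (llama-cli must be built or installed separately)
./llama-cli -m GemmaX2-28-9B-Pretrain.i1-Q4_K_M.gguf \
  -p "Translate English to German: Hello, world." -n 128
```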
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
OpenPipe/rohan-llama-3.1-8b-instruct-cft-juicebox-v1 | OpenPipe | 2025-02-25T23:16:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T20:57:36Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** OpenPipe
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF | mradermacher | 2025-02-25T23:16:10Z | 140 | 0 | transformers | [
"transformers",
"gguf",
"ar",
"bn",
"cs",
"de",
"en",
"es",
"fa",
"fr",
"he",
"hi",
"id",
"it",
"ja",
"km",
"ko",
"lo",
"ms",
"my",
"nl",
"pl",
"pt",
"ru",
"th",
"tl",
"tr",
"ur",
"vi",
"zh",
"base_model:ModelSpace/GemmaX2-28-9B-v0.1",
"base_model:quantized:ModelSpace/GemmaX2-28-9B-v0.1",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-12-03T16:21:13Z | ---
base_model: ModelSpace/GemmaX2-28-9B-v0.1
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
library_name: transformers
license: gemma
license_link: LICENSE
license_name: license
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ModelSpace/GemmaX2-28-9B-v0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
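As an aside on multi-part files: none are listed in the table below, so the file names here are purely hypothetical, but when a quant is split, the parts are simply concatenated in order into a single GGUF file before use:

```bash
# hypothetical split quant: join the parts in order
cat GemmaX2-28-9B-v0.1.i1-Q6_K.gguf.part1of2 \
    GemmaX2-28-9B-v0.1.i1-Q6_K.gguf.part2of2 \
    > GemmaX2-28-9B-v0.1.i1-Q6_K.gguf
```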
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kenhktsui/maths-fasttext-classifier | kenhktsui | 2025-02-25T23:14:09Z | 0 | 0 | fasttext | [
"fasttext",
"text-classification",
"en",
"dataset:kenhktsui/math-classifiers-data",
"arxiv:2409.12122",
"license:mit",
"region:us"
] | text-classification | 2025-02-25T20:31:58Z | ---
license: mit
datasets:
- kenhktsui/math-classifiers-data
language:
- en
metrics:
- f1
pipeline_tag: text-classification
library_name: fasttext
---
# maths-fasttext-classifier
[Dataset](https://huggingface.co/datasets/kenhktsui/math-classifiers-data)
This is part of my [fasttext classifier collection](https://huggingface.co/collections/kenhktsui/fasttext-model-for-pretraining-data-curation-67220374c8acb97a1839553c) for curating pretraining datasets.
This classifier classifies a text into Maths or Others.
The model was trained on 1.6M records, a 50:50 mix of maths and non-maths web text, and achieved a test F1 score of 0.97. The mix deliberately upsamples maths data.
The classifier can be used for LLM pretraining data curation, to enhance capability in mathematics.
It is ultra fast ⚡, with a throughput of ~2000 docs/s on CPU.
Don't underestimate the "old" fasttext classifier! It remains a sound and scalable practice.
For example, [QWEN2.5-MATH](https://arxiv.org/pdf/2409.12122) leverages fasttext to curate pretraining data, although its classifier is not open-sourced.
## 🛠️ Usage
```python
from typing import List
import re

from huggingface_hub import hf_hub_download
import fasttext

model = fasttext.load_model(hf_hub_download("kenhktsui/maths-fasttext-classifier", "model.bin"))

def replace_newlines(text: str) -> str:
    return re.sub("\n+", " ", text)

def predict(text_list: List[str]) -> List[dict]:
    text_list = [replace_newlines(text) for text in text_list]
    pred = model.predict(text_list)
    return [{"label": l[0].lstrip("__label__"), "score": s[0]}
            for l, s in zip(*pred)]

predict([
    """This is a lightning fast model, which can classify at a throughput of 2000 docs/s with CPU""",
    """Differential geometry is a mathematical discipline that studies the geometry of smooth shapes and smooth spaces, otherwise known as smooth manifolds. It uses the techniques of single variable calculus, vector calculus, linear algebra and multilinear algebra.""",
])
# [{'label': 'Others', 'score': 0.99998367},
#  {'label': 'Maths', 'score': 0.99995637}]
```
## 📊 Evaluation
Full classification report:
```
              precision    recall  f1-score   support

       Maths       0.98      0.98      0.98    200000
      Others       0.98      0.98      0.98    200000

    accuracy                           0.98    400000
   macro avg       0.98      0.98      0.98    400000
weighted avg       0.98      0.98      0.98    400000
```
## ⚠️ Known Limitation
The classifier does not handle short text well, which might not be surprising.
|
metagene-ai/METAGENE-1-BnB-4Bit | metagene-ai | 2025-02-25T23:13:53Z | 15 | 0 | null | [
"safetensors",
"llama",
"DNA",
"RNA",
"genomic",
"metagenomic",
"en",
"base_model:metagene-ai/METAGENE-1",
"base_model:quantized:metagene-ai/METAGENE-1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-01-08T04:59:15Z | ---
license: apache-2.0
language:
- en
base_model:
- metagene-ai/METAGENE-1
base_model_relation: quantized
tags:
- DNA
- RNA
- genomic
- metagenomic
---
# METAGENE-1-BnB-4Bit
## **Model Information**
**METAGENE-1** is a 7-billion-parameter autoregressive transformer language model, which we refer to as a *metagenomic foundation model*, that was trained on a novel corpus of diverse metagenomic DNA and RNA sequences comprising over 1.5 trillion base pairs. This dataset is sourced from a large collection of human wastewater samples, processed and sequenced using deep metagenomic (next-generation) sequencing methods. Unlike genomic models that focus on individual genomes or curated sets of specific species, the aim of METAGENE-1 is to capture the full distribution of genomic information present across the human microbiome. After pretraining, this model is designed to aid in tasks in the areas of biosurveillance, pandemic monitoring, and pathogen detection.
This repository contains [METAGENE-1](https://huggingface.co/metagene-ai/METAGENE-1) quantized using [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes) from BF16 down to NF4 with a block size of 64 and storage type `torch.bfloat16`, published as [`metagene-ai/METAGENE-1-BnB-4Bit`](https://huggingface.co/metagene-ai/METAGENE-1-BnB-4Bit).
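As a rough sketch of how such a bitsandbytes NF4 checkpoint is typically loaded (the quantization settings normally ship in the repo's `config.json`, so no explicit `BitsAndBytesConfig` should be needed; this is an assumption about this repo, not verified usage):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# bitsandbytes and accelerate must be installed; the 4-bit quantization
# config stored with the checkpoint is picked up automatically.
tokenizer = AutoTokenizer.from_pretrained("metagene-ai/METAGENE-1-BnB-4Bit")
model = AutoModelForCausalLM.from_pretrained(
    "metagene-ai/METAGENE-1-BnB-4Bit",
    device_map="auto",
    torch_dtype=torch.bfloat16,  # matches the stated storage dtype
)

# illustrative: run a short nucleotide sequence through the model
inputs = tokenizer("ACTGACTGATCG", return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)
```
|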
Metaskepsis/haha | Metaskepsis | 2025-02-25T23:13:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T23:04:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX | TheBlueObserver | 2025-02-25T23:13:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mlx",
"conversational",
"base_model:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged",
"base_model:finetune:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-02-25T23:11:32Z | ---
library_name: transformers
tags:
- mlx
base_model: TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged
---
# TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX
The Model [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX) was
converted to MLX format from [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged)
using mlx-lm version **0.20.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mradermacher/zombies-n-gorillas-v2-GGUF | mradermacher | 2025-02-25T23:13:08Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"sft",
"en",
"base_model:NeuralTofu/zombies-n-gorillas-v2",
"base_model:quantized:NeuralTofu/zombies-n-gorillas-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T22:39:41Z | ---
base_model: NeuralTofu/zombies-n-gorillas-v2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeuralTofu/zombies-n-gorillas-v2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/watt-tool-8B-GGUF | mradermacher | 2025-02-25T23:13:07Z | 240 | 1 | transformers | [
"transformers",
"gguf",
"function-calling",
"tool-use",
"llama",
"bfcl",
"en",
"base_model:watt-ai/watt-tool-8B",
"base_model:quantized:watt-ai/watt-tool-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T01:01:07Z | ---
base_model: watt-ai/watt-tool-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- function-calling
- tool-use
- llama
- bfcl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/watt-ai/watt-tool-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/watt-tool-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
beshard/model_for_targon_lora | beshard | 2025-02-25T23:12:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-02-25T23:11:51Z | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** beshard
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
godofmining/daydate_v2 | godofmining | 2025-02-25T23:11:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T23:09:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RayneAmes/bagon_v2 | RayneAmes | 2025-02-25T23:11:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-02-25T23:08:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF | mradermacher | 2025-02-25T23:10:41Z | 216 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:kayfour/T3Q-ko-gemma2-9b-it-safe-v1",
"base_model:quantized:kayfour/T3Q-ko-gemma2-9b-it-safe-v1",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-02-25T01:07:57Z | ---
base_model: kayfour/T3Q-ko-gemma2-9b-it-safe-v1
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kayfour/T3Q-ko-gemma2-9b-it-safe-v1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|