| Column | Dtype | Range / Values |
|:--------------|:----------------------|:------------------------------------------|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-25 12:29:04 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 495 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-25 12:27:57 |
| card | string | length 11 – 1.01M |
mradermacher/bge_large_medical-GGUF
mradermacher
2025-02-26T00:59:56Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "feature-extraction" ]
null
2025-02-26T00:58:16Z
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ls-da3m0ns/bge_large_medical
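The card itself carries no usage snippet. A minimal sketch for loading one of these static quants as an embedding model with `llama-cpp-python` might look like this (the library choice and the quant file name are assumptions; check the repository's file list):

```python
# Hedged sketch: assumes `pip install llama-cpp-python huggingface_hub`
# and that a Q4_K_M quant exists in the repo under this name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/bge_large_medical-GGUF",
    filename="bge_large_medical.Q4_K_M.gguf",  # assumed file name
)
llm = Llama(model_path=path, embedding=True)  # BGE is a feature-extraction model
emb = llm.create_embedding("chest pain radiating to the left arm")
print(len(emb["data"][0]["embedding"]))
```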
debjit20504/miRNA-biobert
debjit20504
2025-02-26T00:58:23Z
38
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "biobert", "miRNA", "biomedical", "LoRA", "fine-tuning", "dataset:custom-biomedical-dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-02-17T16:08:57Z
---
tags:
- text-classification
- transformers
- biobert
- miRNA
- biomedical
- LoRA
- fine-tuning
library_name: transformers
datasets:
- custom-biomedical-dataset
license: apache-2.0
---

# 🧬 miRNA-BioBERT: Fine-Tuned BioBERT for miRNA Sentence Classification

**Fine-tuned BioBERT model for classifying miRNA-related sentences in biomedical research papers.**

<!-- 🔗 **Hugging Face Model Link**: [debjit20504/miRNA-biobert](https://huggingface.co/debjit20504/miRNA-biobert) -->

---

## 📌 Overview

**miRNA-BioBERT** is a fine-tuned version of [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1), trained specifically for **classifying sentences** as **miRNA-related (relevant) or not (irrelevant)**. The model is useful for **automating literature reviews**, **extracting relevant sentences**, and **identifying key insights** in genomic research.

✔ **Base Model**: `dmis-lab/biobert-base-cased-v1.1`
✔ **Fine-tuning Method**: **LoRA (Low-Rank Adaptation)**
✔ **Dataset**: **Curated biomedical text corpus containing labeled miRNA-relevant and non-relevant sentences**
✔ **Task**: **Binary classification (1 = functional, 0 = non-functional)**
✔ **Trained on**: **RTX A6000 GPU (5 epochs, batch size 32, learning rate 2e-5)**

## 🚀 How to Use the Model

### 1️⃣ Install Dependencies

```bash
pip install transformers torch
```

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "debjit20504/miRNA-biobert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Move model to GPU or MPS (for Mac)
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

def classify_text(text):
    inputs = tokenizer(text, return_tensors="pt").to(device)
    with torch.no_grad():
        output = model(**inputs)
    label = torch.argmax(output.logits, dim=1).item()
    return "Functional" if label == 1 else "Non-functional"

# Example test
sample_text = "The results showed that miR-223-3p decreased in glioblastoma tissue but NLRP3 increased."
print(f"Classification: {classify_text(sample_text)}")
```

## 📊 Training Details

- Dataset: Biomedical text dataset with 429,785 relevant sentences and 87,966 irrelevant sentences.
- Fine-Tuning Method: LoRA (Low-Rank Adaptation) for efficient training.
- Training Hardware: NVIDIA RTX A6000 GPU.
- Training Settings:
  - Batch size: 32
  - Learning rate: 2e-5
  - Optimizer: AdamW
  - Warmup steps: 1000
  - Epochs: 5
  - Mixed precision (fp16): ✅ Enabled for efficiency.

---

## 📖 Model Applications

✅ **Biomedical NLP** – Extracting meaningful information from biomedical literature.
✅ **miRNA Research** – Identifying sentences discussing miRNA mechanisms.
✅ **Automated Literature Review** – Filtering relevant studies efficiently.
✅ **Genomics & Bioinformatics** – Enhancing data retrieval from scientific texts.

---

## 📬 Contact

For any questions or collaborations, reach out via:
**📧 Email**: [email protected]
**🔗 LinkedIn**: https://www.linkedin.com/in/debjit-pramanik-88a837171/
mradermacher/medical_transcription_generator-GGUF
mradermacher
2025-02-26T00:56:04Z
0
0
transformers
[ "transformers", "gguf", "medical", "en", "base_model:alibidaran/medical_transcription_generator", "base_model:quantized:alibidaran/medical_transcription_generator", "endpoints_compatible", "region:us" ]
null
2025-02-26T00:53:56Z
---
base_model: alibidaran/medical_transcription_generator
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- medical
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/alibidaran/medical_transcription_generator

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/medical_transcription_generator-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/medical_transcription_generator-GGUF/resolve/main/medical_transcription_generator.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
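For a quick smoke test of one of the quants listed above, a minimal `llama-cpp-python` sketch (the library choice is an assumption on my part; the card itself only points at TheBloke's READMEs for general GGUF usage):

```python
# Hedged sketch: downloads the Q4_K_M file named in the table above and
# generates a short continuation.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/medical_transcription_generator-GGUF",
    filename="medical_transcription_generator.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
out = llm("PRESENTING COMPLAINT: shortness of breath.", max_tokens=64)
print(out["choices"][0]["text"])
```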
mradermacher/medical_summarization-GGUF
mradermacher
2025-02-26T00:53:09Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2025-02-26T00:52:36Z
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Falconsai/medical_summarization
TaoZewen/rl_course_vizdoom_health_gathering_supreme_V2
TaoZewen
2025-02-26T00:52:02Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-02-26T00:51:58Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 12.48 +/- 4.14
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:

```
python -m sample_factory.huggingface.load_from_hub -r TaoZewen/rl_course_vizdoom_health_gathering_supreme_V2
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:

```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_V2
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:

```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_V2 --restart_behavior=resume --train_for_env_steps=10000000000
```

Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
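For the ViZDoom examples that ship with Sample-Factory 2.0, the `<path.to.enjoy.module>` and `<path.to.train.module>` placeholders above are likely `sf_examples.vizdoom.enjoy_vizdoom` and `sf_examples.vizdoom.train_vizdoom` (an assumption based on the upstream examples; verify against the Sample-Factory documentation):

```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme_V2
```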
mradermacher/Nostr-Llama-3.1-8B-GGUF
mradermacher
2025-02-26T00:49:58Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:some1nostr/Nostr-Llama-3.1-8B", "base_model:quantized:some1nostr/Nostr-Llama-3.1-8B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-25T23:58:38Z
---
base_model: some1nostr/Nostr-Llama-3.1-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/some1nostr/Nostr-Llama-3.1-8B

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Nostr-Llama-3.1-8B-GGUF/resolve/main/Nostr-Llama-3.1-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
Roybello/Roy-replicate
Roybello
2025-02-26T00:48:37Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-25T18:56:29Z
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: ROY
---

# Roy Replicate

<Gallery />

Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `ROY` to trigger the image generation.

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Roybello/Roy-replicate', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
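Continuing the snippet above with the documented `ROY` trigger word (the prompt wording and the generation settings are illustrative, not from the card):

```py
# Illustrative continuation: the card only mandates the ROY trigger word;
# the prompt text and sampler settings below are assumptions.
image = pipeline(
    "ROY, portrait photo in soft studio light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("roy.png")
```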
Paladiso/d72ef992-2955-4289-aef8-fcc6be507dfb
Paladiso
2025-02-26T00:48:14Z
0
0
peft
[ "peft", "safetensors", "gemma", "axolotl", "generated_from_trainer", "base_model:unsloth/gemma-7b-it", "base_model:adapter:unsloth/gemma-7b-it", "license:apache-2.0", "region:us" ]
null
2025-02-26T00:42:43Z
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-7b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d72ef992-2955-4289-aef8-fcc6be507dfb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: unsloth/gemma-7b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 2e4b4f09c9ae8b90_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/2e4b4f09c9ae8b90_train_data.json
  type:
    field_input: content
    field_instruction: instruction
    field_output: new_contents
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Paladiso/d72ef992-2955-4289-aef8-fcc6be507dfb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/2e4b4f09c9ae8b90_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a3c5ad4e-0086-4c2f-b5d5-c05271f38d4e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a3c5ad4e-0086-4c2f-b5d5-c05271f38d4e
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# d72ef992-2955-4289-aef8-fcc6be507dfb

This model is a fine-tuned version of [unsloth/gemma-7b-it](https://huggingface.co/unsloth/gemma-7b-it) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1654

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3649 | 0.0004 | 1 | 0.3410 |
| 0.5629 | 0.0011 | 3 | 0.3297 |
| 0.1874 | 0.0023 | 6 | 0.2529 |
| 0.1635 | 0.0034 | 9 | 0.1654 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
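The card never shows how to load the adapter. A minimal sketch with `peft` (a sketch, assuming the adapter targets the base model listed in the card and that your hardware fits gemma-7b):

```python
# Hedged sketch: load the LoRA adapter on top of the base model named in the card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-7b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Paladiso/d72ef992-2955-4289-aef8-fcc6be507dfb")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-it")
```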
Kei5uke/llama3
Kei5uke
2025-02-26T00:47:36Z
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-02-26T00:37:42Z
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** Kei5uke
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF
mradermacher
2025-02-26T00:46:10Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "arxiv:2502.02384", "en", "base_model:thu-ml/STAIR-Llama-3.1-8B-SFT", "base_model:quantized:thu-ml/STAIR-Llama-3.1-8B-SFT", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-25T22:59:19Z
---
base_model: thu-ml/STAIR-Llama-3.1-8B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/thu-ml/STAIR-Llama-3.1-8B-SFT

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
erax-ai/EraX-WoW-Turbo-VI-b256-lr5e-5-wd0.08-gradnorm0.8-cp8400
erax-ai
2025-02-26T00:45:42Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-02-26T00:42:53Z
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF
mradermacher
2025-02-26T00:39:33Z
0
0
transformers
[ "transformers", "gguf", "llama-factory", "full", "generated_from_trainer", "arxiv:2502.02384", "en", "base_model:thu-ml/STAIR-Llama-3.1-8B-SFT", "base_model:quantized:thu-ml/STAIR-Llama-3.1-8B-SFT", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-25T19:33:31Z
---
base_model: thu-ml/STAIR-Llama-3.1-8B-SFT
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- llama-factory
- full
- generated_from_trainer
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/thu-ml/STAIR-Llama-3.1-8B-SFT

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/STAIR-Llama-3.1-8B-SFT-GGUF/resolve/main/STAIR-Llama-3.1-8B-SFT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
mrdamha/Rationalist_in_Islam_001
mrdamha
2025-02-26T00:39:06Z
0
0
null
[ "license:other", "region:us" ]
null
2025-02-25T07:37:05Z
---
license: other
license_name: other
license_link: LICENSE
---
EVX-Tech/EVXSigmaChatBot
EVX-Tech
2025-02-26T00:35:49Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-02-16T16:28:52Z
---
license: mit
---

## This bot uses Dialogflow

To use this bot, import it into Google Dialogflow.
gazimagomed/GazGPT
gazimagomed
2025-02-26T00:34:06Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-02-26T00:34:06Z
---
license: apache-2.0
---
qing-yao/long_first_headfinal_seed-42_1e-3
qing-yao
2025-02-26T00:33:55Z
1
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-21T21:22:57Z
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: long_first_headfinal_seed-42_1e-3
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# long_first_headfinal_seed-42_1e-3

This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 5.1160
- Accuracy: 0.2007

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 6.1724 | 0.9994 | 1470 | 5.5103 | 0.1759 |
| 4.5289 | 1.9992 | 2940 | 5.4000 | 0.1844 |
| 3.8901 | 2.9991 | 4410 | 5.3044 | 0.1895 |
| 3.7154 | 3.9996 | 5881 | 5.2299 | 0.1952 |
| 3.4885 | 4.9994 | 7351 | 5.1806 | 0.1983 |
| 3.4097 | 5.9992 | 8821 | 5.1625 | 0.1984 |
| 3.3049 | 6.9991 | 10291 | 5.1184 | 0.1994 |
| 3.2579 | 7.9996 | 11762 | 5.1354 | 0.2021 |
| 3.2058 | 8.9994 | 13232 | 5.1414 | 0.2010 |
| 3.1678 | 9.9992 | 14702 | 5.1105 | 0.2010 |
| 3.143 | 10.9991 | 16172 | 5.0866 | 0.1999 |
| 3.1069 | 11.9996 | 17643 | 5.1130 | 0.2012 |
| 3.1019 | 12.9994 | 19113 | 5.1315 | 0.2012 |
| 3.0681 | 13.9992 | 20583 | 5.1160 | 0.2007 |

### Framework versions

- Transformers 4.46.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.0
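The card stops short of a usage example; since the record's tags mark this as an OPT-architecture text-generation checkpoint, a minimal sketch (assuming the tokenizer is bundled in the repository):

```python
# Hedged sketch: standard transformers text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="qing-yao/long_first_headfinal_seed-42_1e-3")
print(generator("The", max_new_tokens=20)[0]["generated_text"])
```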
haohsuan/N8N
haohsuan
2025-02-26T00:33:21Z
0
0
null
[ "license:mit", "region:us" ]
null
2025-02-26T00:32:41Z
---
license: mit
---

```
pip install vllm
vllm serve "deepseek-ai/DeepSeek-R1"
```
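`vllm serve` exposes an OpenAI-compatible API (on port 8000 by default, a vLLM default rather than anything this card specifies); a minimal client sketch:

```python
# Hedged sketch: query the vLLM server started above via the OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```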
iaminju/DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k
iaminju
2025-02-26T00:28:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T23:41:11Z
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---

# Model Card for DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k

This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="iaminju/DeepSeek-R1-Distill-Qwen-1.5B-GRPO_sample_1k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/minjuseo/huggingface/runs/7gqdvv36)

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
thellumi/LLuMi_Think_70B
thellumi
2025-02-26T00:28:13Z
2
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "deepseek", "meta", "qwen", "en", "tr", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-23T21:50:09Z
--- license: mit language: - en - tr pipeline_tag: text-generation library_name: transformers tags: - conversational - llama - deepseek - meta - qwen --- <p align="center"> <a href="https://thelucy.tech"><b>Powered by the Lucy</b></a> </p> ## Model Information The LLuMi multilingual large language model (LLM) is an instruction tuned generative model in 70B (text in/text out). LLuMi builds upon this robust foundation by incorporating additional refinements and distillation techniques inspired by the DeepSeek-R1 framework. This results in a model that not only retains the original strengths of Llama 3.3 but also delivers improved performance and efficiency for real-world applications. LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing. <p align="center"> <a href="[email protected]"><a>[email protected]</a></a> </p> **Model Release Date:** * **LLuMi Think LLM Family: February 24, 2025** ## 1. Introduction We introduce LLuMi, a state-of-the-art multilingual large language model (LLM) built on the robust Llama 3.3 70B architecture. LLuMi is instruction tuned to excel in real-world applications, particularly in multilingual dialogue and complex reasoning tasks. Leveraging advanced refinements and distillation techniques inspired by the DeepSeek-R1 framework, LLuMi not only retains the core strengths of its Llama 3.3 foundation but also delivers enhanced performance and efficiency. By integrating large-scale reinforcement learning directly on the base model, LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing. To support the research community and foster further innovation, we are releasing the full LLuMi model along with a range of distilled checkpoints across various sizes. This initiative empowers researchers to deploy both the complete model and resource-efficient distilled versions for diverse applications. NOTE: Before deploying LLuMi locally, please review the How to use & Usage Recommendations section for detailed guidelines and best practices. **Distillation: Unlocking the Power of Smaller Models** - We demonstrate that the advanced reasoning patterns of larger models can be distilled into smaller, more efficient models. This approach yields improved performance compared to the reasoning strategies derived solely via reinforcement learning on smaller models. The open source DeepSeek-R1 frameworkโ€”and its APIโ€”play a crucial role in enabling the research community to distill and develop even more powerful smaller models in the future. - Leveraging the rich reasoning data generated by DeepSeek-R1, we fine-tuned LLuMiโ€”a dense, instruction-tuned model built upon the Llama 3.3 70B architecture. Our evaluation results show that the distilled LLuMi model performs exceptionally well on various benchmarks, often matching or even surpassing the performance of larger models. - Furthermore, we are excited to open-source the full LLuMi model along with a series of distilled checkpoints across multiple sizesโ€”including 3B, 8B, and 70Bโ€”based on the LLuMi framework. This initiative provides the research community with access to both the complete model and its distilled versions, enabling a wide range of applications with varying computational needs. 
**Post-Training: Large-Scaling Reinforcement Learning on the Base Model** - We directly apply reinforcement learning (RL) to the base LLuMi model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach enables LLuMi to explore advanced chain-of-thought (CoT) capabilities for tackling complex problems, leading to enhanced self-verification, reflective reasoning, and the generation of extended CoTs. Notably, LLuMi is among the first open research initiatives to demonstrate that the reasoning capabilities of large language models can be effectively incentivized purely through RL, without the need for an initial SFT phase. This breakthrough paves the way for future advancements in scalable reinforcement learning strategies for LLMs. We introduce our comprehensive pipeline for developing LLuMi inspired from DeepSeek-R1, which includes: - Two RL Stages: Designed to discover improved reasoning patterns and align the model with human preferences. - Two SFT Stages: Serving as the foundational seed for both the modelโ€™s reasoning and non-reasoning capabilities. We believe this innovative pipeline will not only enhance LLuMi's performance but also benefit the industry by inspiring the creation of more robust and efficient models. ## 2. Model Distillation and GRPO-Based Thinking Enhancement The LLuMi 70B model has been meticulously developed using the advanced techniques of DeepSeek-R1 Distill Llama 3.3 70B. By leveraging state-of-the-art distillation methods, LLuMi 70B not only retains the powerful multilingual and instruction-tuned capabilities of its foundation but also achieves enhanced performance and efficiency for diverse real-world applications. Furthermore, inspired by the successes of DeepSeek-R1, we have infused our smaller LLuMi 8B and 3B models with a unique thinking property through the use of GRPO (Guided Reasoning Policy Optimization). This innovative approach endows the smaller models with sophisticated chain-of-thought reasoning and reflective problem-solving abilitiesโ€”ensuring that even with fewer parameters, they can deliver agile and context-aware responses. Together, these advancements underscore our commitment to creating a versatile family of models that scale seamlessly from 3B to 70B, providing powerful solutions tailored to various computational and application needs. ## 3. Model Downloads ### LLuMi Think Models <div align="center"> | **Model** | **Base Model** | **Download** | | :------------: | :------------: | :------------: | | LLuMi Think 3B | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_3B) | | LLuMi Think 8B | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_8B) | | LLuMi Think 70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_70B) | </div> ## 4. How to use This repository contains three versions of LLuMi Think LLM Models, for use with transformers and with bitsandbytes codebase. - **Use with transformers** Starting with `transformers >= 4.48.3` onward, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function. Make sure to update your transformers installation via `pip install --upgrade transformers`. 
See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "thellumi/LLuMi_Think_70B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "user", "content": "Why are tomatoes red?"}, ] outputs = pipeline( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` - **Use `bitsandbytes`** The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimisations using `bitsandbytes` and `transformers` See the snippet below for usage: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "thellumi/LLuMi_Think_70B" quantization_config = BitsAndBytesConfig(load_in_8bit=True) quantized_model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config) tokenizer = AutoTokenizer.from_pretrained(model_id) input_text = "Why are tomatoes red?" input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") output = quantized_model.generate(**input_ids, max_new_tokens=10) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` To load in 4-bit simply pass `load_in_4bit=True` ### 5. Usage Recommendations **We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:** 1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs. 2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.** 3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}." 4. When evaluating model performance, it is recommended to conduct multiple tests and average the results. Additionally, DeepSeek have observed that the DeepSeek-R1 series models tend to bypass thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.** ## 6. Training Data **Overview:** LLuMi is built upon the robust Llama 3.3 architecture, which was pretrained on approximately 15 trillion tokens sourced from publicly available datasets. For fine-tuning, LLuMi leverages a combination of publicly available instruction datasets and over 10 million examples sourced from Hugging Face. This comprehensive training corpus has been curated to ensure high performance across various languages, with dedicated support for Turkish and other languages. **Data Freshness:** The pretraining data includes content up to a cutoff date of Aug. 2024, ensuring that LLuMi is aligned with recent language trends and developments. ## 7. 
Benchmarks | Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating | |------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------| | Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 | | OpenAI o1-1217 | 79.2 | - | 96.4 | 75.7 | 63.4 | 2061 | | OpenAI o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 | | OpenAI GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 | | QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 | | DeepSeek R1 | 79.8 | - | 97.3 | 71.5 | 65.9 | 2209 | | LLuMi Think 70B | 69.3 | 86.4 | 94.1 | 64.8 | 56.9 | 1603 | **Note on Benchmark Results:** Due to hardware limitations, full-scale benchmark tests could not be performed, and the results may vary. We remain fully transparent about these constraints and are actively working towards securing the necessary resources to conduct comprehensive evaluations in the near future. ## 8. Responsibility & Safety At LLuMi, we are committed to promoting responsible and ethical use of our technology. We recognize that large language models carry inherent risks and potential for misuse, and we have taken several measures to mitigate these challenges: - **Bias Mitigation:** We have implemented various strategies during training to minimize biases in model outputs. However, users should be aware that, despite these efforts, occasional biases or unintended outputs may still occur. - **Usage Guidelines:** LLuMi is designed for research and responsible deployment. We strongly encourage users to adhere to ethical guidelines, applicable laws, and best practices when using the model. Generating harmful, misleading, or offensive content is strictly prohibited. - **Safety Measures:** Users deploying LLuMi in real-world applications should implement additional safety filters and monitoring mechanisms. We recommend regular audits and evaluations to ensure that the modelโ€™s outputs remain within acceptable ethical boundaries. - **Community Engagement:** We invite the community to provide feedback on any safety or ethical issues encountered during usage. This collaborative approach is vital for continuously refining the model and addressing potential risks. - **Transparency and Accountability:** By open-sourcing LLuMi, we aim to foster transparency and accountability. We commit to ongoing research and updates focused on improving the model's safety and ethical performance. By using LLuMi, you agree to follow these guidelines and contribute to a safer, more responsible AI ecosystem. ## 9. License This code repository and the model weights are licensed under the [MIT License](https://choosealicense.com/licenses/mit/). LLuMi Think series support commercial use, allow for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that: - LLuMi Think 3B is derived from [Qwen-2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE). - LLuMi Think 8B is derived from [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). 
- LLuMi Think 70B is derived from [Llama3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

## 10. Citation

```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

```
@misc{thellumi,
      author = {The Lucy},
      month = feb,
      title = {{LLuMi Think}},
      howpublished = {https://llumi.tech},
      year = {2025}
}
```

## 11. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
thellumi/LLuMi_Think_8B
thellumi
2025-02-26T00:27:50Z
4
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "deepseek", "meta", "qwen", "en", "tr", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-24T21:33:25Z
---
license: mit
language:
- en
- tr
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
- llama
- deepseek
- meta
- qwen
---

<p align="center">
  <a href="https://thelucy.tech"><b>Powered by the Lucy</b></a>
</p>

## Model Information

The LLuMi multilingual large language model (LLM) is an instruction-tuned generative model in 70B (text in/text out). LLuMi builds upon the robust Llama 3.3 foundation by incorporating additional refinements and distillation techniques inspired by the DeepSeek-R1 framework. This results in a model that not only retains the original strengths of Llama 3.3 but also delivers improved performance and efficiency for real-world applications. LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.

<p align="center">
  <a href="mailto:[email protected]">[email protected]</a>
</p>

**Model Release Date:**

* **LLuMi Think LLM Family: February 24, 2025**

## 1. Introduction

We introduce LLuMi, a state-of-the-art multilingual large language model (LLM) built on the robust Llama 3.3 70B architecture. LLuMi is instruction-tuned to excel in real-world applications, particularly in multilingual dialogue and complex reasoning tasks. Leveraging advanced refinements and distillation techniques inspired by the DeepSeek-R1 framework, LLuMi not only retains the core strengths of its Llama 3.3 foundation but also delivers enhanced performance and efficiency. By integrating large-scale reinforcement learning directly on the base model, LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.

To support the research community and foster further innovation, we are releasing the full LLuMi model along with a range of distilled checkpoints across various sizes. This initiative empowers researchers to deploy both the complete model and resource-efficient distilled versions for diverse applications.

NOTE: Before deploying LLuMi locally, please review the How to use & Usage Recommendations sections for detailed guidelines and best practices.

**Distillation: Unlocking the Power of Smaller Models**

- We demonstrate that the advanced reasoning patterns of larger models can be distilled into smaller, more efficient models. This approach yields improved performance compared to the reasoning strategies derived solely via reinforcement learning on smaller models. The open-source DeepSeek-R1 framework, together with its API, plays a crucial role in enabling the research community to distill and develop even more powerful smaller models in the future.
- Leveraging the rich reasoning data generated by DeepSeek-R1, we fine-tuned LLuMi, a dense, instruction-tuned model built upon the Llama 3.3 70B architecture. Our evaluation results show that the distilled LLuMi model performs exceptionally well on various benchmarks, often matching or even surpassing the performance of larger models.
- Furthermore, we are excited to open-source the full LLuMi model along with a series of distilled checkpoints across multiple sizes (3B, 8B, and 70B) based on the LLuMi framework. This initiative provides the research community with access to both the complete model and its distilled versions, enabling a wide range of applications with varying computational needs.
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**

- We directly apply reinforcement learning (RL) to the base LLuMi model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach enables LLuMi to explore advanced chain-of-thought (CoT) capabilities for tackling complex problems, leading to enhanced self-verification, reflective reasoning, and the generation of extended CoTs. Notably, LLuMi is among the first open research initiatives to demonstrate that the reasoning capabilities of large language models can be effectively incentivized purely through RL, without the need for an initial SFT phase. This breakthrough paves the way for future advancements in scalable reinforcement learning strategies for LLMs.

We introduce our comprehensive pipeline for developing LLuMi, inspired by DeepSeek-R1, which includes:

- Two RL stages, designed to discover improved reasoning patterns and align the model with human preferences.
- Two SFT stages, serving as the foundational seed for both the model's reasoning and non-reasoning capabilities.

We believe this innovative pipeline will not only enhance LLuMi's performance but also benefit the industry by inspiring the creation of more robust and efficient models.

## 2. Model Distillation and GRPO-Based Thinking Enhancement

The LLuMi 70B model has been meticulously developed using the advanced techniques of DeepSeek-R1 Distill Llama 3.3 70B. By leveraging state-of-the-art distillation methods, LLuMi 70B not only retains the powerful multilingual and instruction-tuned capabilities of its foundation but also achieves enhanced performance and efficiency for diverse real-world applications.

Furthermore, inspired by the successes of DeepSeek-R1, we have infused our smaller LLuMi 8B and 3B models with a unique thinking property through the use of GRPO (Group Relative Policy Optimization). This innovative approach endows the smaller models with sophisticated chain-of-thought reasoning and reflective problem-solving abilities, ensuring that even with fewer parameters, they can deliver agile and context-aware responses.

Together, these advancements underscore our commitment to creating a versatile family of models that scale seamlessly from 3B to 70B, providing powerful solutions tailored to various computational and application needs.

## 3. Model Downloads

### LLuMi Think Models

<div align="center">

| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| LLuMi Think 3B | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_3B) |
| LLuMi Think 8B | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_8B) |
| LLuMi Think 70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_70B) |

</div>

## 4. How to use

The LLuMi Think LLM family ships in three versions, for use with the `transformers` and `bitsandbytes` codebases.

- **Use with transformers**

Starting with `transformers >= 4.48.3`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "thellumi/LLuMi_Think_70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Why are tomatoes red?"},
]
outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

- **Use `bitsandbytes`**

The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimizations using `bitsandbytes` and `transformers`. See the snippet below for usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "thellumi/LLuMi_Think_70B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "Why are tomatoes red?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To load in 4-bit, simply pass `load_in_4bit=True`.
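A 4-bit variant of the snippet above might look like the following sketch; the `nf4` quantization type, double quantization, and compute dtype are illustrative assumptions (common `bitsandbytes` choices), not settings prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "thellumi/LLuMi_Think_70B"

# 4-bit loading; the nf4/double-quant choices below are illustrative assumptions.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Why are tomatoes red?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```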
## 5. Usage Recommendations

**We recommend adhering to the following configurations when utilizing the LLuMi Think series models, including benchmarking, to achieve the expected performance:**

1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.

Additionally, DeepSeek has observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**

## 6. Training Data

**Overview:** LLuMi is built upon the robust Llama 3.3 architecture, which was pretrained on approximately 15 trillion tokens sourced from publicly available datasets. For fine-tuning, LLuMi leverages a combination of publicly available instruction datasets and over 10 million examples sourced from Hugging Face. This comprehensive training corpus has been curated to ensure high performance across various languages, with dedicated support for Turkish and other languages.

**Data Freshness:** The pretraining data includes content up to a cutoff date of August 2024, ensuring that LLuMi is aligned with recent language trends and developments.

## 7. Benchmarks

| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------|------------------|-------------------|-----------------|---------------------|----------------------|-------------------|
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| OpenAI o1-1217 | 79.2 | - | 96.4 | 75.7 | 63.4 | 2061 |
| OpenAI o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| OpenAI GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek R1 | 79.8 | - | 97.3 | 71.5 | 65.9 | 2209 |
| LLuMi Think 70B | 69.3 | 86.4 | 94.1 | 64.8 | 56.9 | 1603 |

**Note on Benchmark Results:** Due to hardware limitations, full-scale benchmark tests could not be performed, and the results may vary. We remain fully transparent about these constraints and are actively working towards securing the necessary resources to conduct comprehensive evaluations in the near future.

## 8. Responsibility & Safety

At LLuMi, we are committed to promoting responsible and ethical use of our technology. We recognize that large language models carry inherent risks and potential for misuse, and we have taken several measures to mitigate these challenges:

- **Bias Mitigation:** We have implemented various strategies during training to minimize biases in model outputs. However, users should be aware that, despite these efforts, occasional biases or unintended outputs may still occur.
- **Usage Guidelines:** LLuMi is designed for research and responsible deployment. We strongly encourage users to adhere to ethical guidelines, applicable laws, and best practices when using the model. Generating harmful, misleading, or offensive content is strictly prohibited.
- **Safety Measures:** Users deploying LLuMi in real-world applications should implement additional safety filters and monitoring mechanisms. We recommend regular audits and evaluations to ensure that the model's outputs remain within acceptable ethical boundaries.
- **Community Engagement:** We invite the community to provide feedback on any safety or ethical issues encountered during usage. This collaborative approach is vital for continuously refining the model and addressing potential risks.
- **Transparency and Accountability:** By open-sourcing LLuMi, we aim to foster transparency and accountability. We commit to ongoing research and updates focused on improving the model's safety and ethical performance.

By using LLuMi, you agree to follow these guidelines and contribute to a safer, more responsible AI ecosystem.

## 9. License

This code repository and the model weights are licensed under the [MIT License](https://choosealicense.com/licenses/mit/). The LLuMi Think series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:

- LLuMi Think 3B is derived from [Qwen-2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE).
- LLuMi Think 8B is derived from [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- LLuMi Think 70B is derived from [Llama3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

## 10. Citation

```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

```
@misc{thellumi,
      author = {The Lucy},
      month = feb,
      title = {{LLuMi Think}},
      howpublished = {https://llumi.tech},
      year = {2025}
}
```

## 11. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
nomnoos37/250216-Mistral-Nemo-ggls-v1.3.6-0.5-1-epoch
nomnoos37
2025-02-26T00:27:31Z
0
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-26T00:02:30Z
---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** nomnoos37
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
thellumi/LLuMi_Think_3B
thellumi
2025-02-26T00:27:13Z
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "llama", "deepseek", "meta", "qwen", "en", "tr", "arxiv:2501.12948", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-24T19:59:05Z
---
license: mit
language:
- en
- tr
pipeline_tag: text-generation
library_name: transformers
tags:
- conversational
- llama
- deepseek
- meta
- qwen
---

<p align="center">
  <a href="https://thelucy.tech"><b>Powered by the Lucy</b></a>
</p>

## Model Information

The LLuMi multilingual large language model (LLM) is an instruction-tuned generative model in 70B (text in/text out). LLuMi builds upon the robust Llama 3.3 foundation by incorporating additional refinements and distillation techniques inspired by the DeepSeek-R1 framework. This results in a model that not only retains the original strengths of Llama 3.3 but also delivers improved performance and efficiency for real-world applications. LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.

<p align="center">
  <a href="mailto:[email protected]">[email protected]</a>
</p>

**Model Release Date:**

* **LLuMi Think LLM Family: February 24, 2025**

## 1. Introduction

We introduce LLuMi, a state-of-the-art multilingual large language model (LLM) built on the robust Llama 3.3 70B architecture. LLuMi is instruction-tuned to excel in real-world applications, particularly in multilingual dialogue and complex reasoning tasks. Leveraging advanced refinements and distillation techniques inspired by the DeepSeek-R1 framework, LLuMi not only retains the core strengths of its Llama 3.3 foundation but also delivers enhanced performance and efficiency. By integrating large-scale reinforcement learning directly on the base model, LLuMi exhibits sophisticated chain-of-thought behaviors, improved self-verification, and reduced issues such as repetition and language mixing.

To support the research community and foster further innovation, we are releasing the full LLuMi model along with a range of distilled checkpoints across various sizes. This initiative empowers researchers to deploy both the complete model and resource-efficient distilled versions for diverse applications.

NOTE: Before deploying LLuMi locally, please review the How to use & Usage Recommendations sections for detailed guidelines and best practices.

**Distillation: Unlocking the Power of Smaller Models**

- We demonstrate that the advanced reasoning patterns of larger models can be distilled into smaller, more efficient models. This approach yields improved performance compared to the reasoning strategies derived solely via reinforcement learning on smaller models. The open-source DeepSeek-R1 framework, together with its API, plays a crucial role in enabling the research community to distill and develop even more powerful smaller models in the future.
- Leveraging the rich reasoning data generated by DeepSeek-R1, we fine-tuned LLuMi, a dense, instruction-tuned model built upon the Llama 3.3 70B architecture. Our evaluation results show that the distilled LLuMi model performs exceptionally well on various benchmarks, often matching or even surpassing the performance of larger models.
- Furthermore, we are excited to open-source the full LLuMi model along with a series of distilled checkpoints across multiple sizes (3B, 8B, and 70B) based on the LLuMi framework. This initiative provides the research community with access to both the complete model and its distilled versions, enabling a wide range of applications with varying computational needs.
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**

- We directly apply reinforcement learning (RL) to the base LLuMi model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach enables LLuMi to explore advanced chain-of-thought (CoT) capabilities for tackling complex problems, leading to enhanced self-verification, reflective reasoning, and the generation of extended CoTs. Notably, LLuMi is among the first open research initiatives to demonstrate that the reasoning capabilities of large language models can be effectively incentivized purely through RL, without the need for an initial SFT phase. This breakthrough paves the way for future advancements in scalable reinforcement learning strategies for LLMs.

We introduce our comprehensive pipeline for developing LLuMi, inspired by DeepSeek-R1, which includes:

- Two RL stages, designed to discover improved reasoning patterns and align the model with human preferences.
- Two SFT stages, serving as the foundational seed for both the model's reasoning and non-reasoning capabilities.

We believe this innovative pipeline will not only enhance LLuMi's performance but also benefit the industry by inspiring the creation of more robust and efficient models.

## 2. Model Distillation and GRPO-Based Thinking Enhancement

The LLuMi 70B model has been meticulously developed using the advanced techniques of DeepSeek-R1 Distill Llama 3.3 70B. By leveraging state-of-the-art distillation methods, LLuMi 70B not only retains the powerful multilingual and instruction-tuned capabilities of its foundation but also achieves enhanced performance and efficiency for diverse real-world applications.

Furthermore, inspired by the successes of DeepSeek-R1, we have infused our smaller LLuMi 8B and 3B models with a unique thinking property through the use of GRPO (Group Relative Policy Optimization). This innovative approach endows the smaller models with sophisticated chain-of-thought reasoning and reflective problem-solving abilities, ensuring that even with fewer parameters, they can deliver agile and context-aware responses.

Together, these advancements underscore our commitment to creating a versatile family of models that scale seamlessly from 3B to 70B, providing powerful solutions tailored to various computational and application needs.

## 3. Model Downloads

### LLuMi Think Models

<div align="center">

| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| LLuMi Think 3B | [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_3B) |
| LLuMi Think 8B | [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_8B) |
| LLuMi Think 70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [๐Ÿค— HuggingFace](https://huggingface.co/thellumi/LLuMi_Think_70B) |

</div>

## 4. How to use

The LLuMi Think LLM family ships in three versions, for use with the `transformers` and `bitsandbytes` codebases.

- **Use with transformers**

Starting with `transformers >= 4.48.3`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.
See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "thellumi/LLuMi_Think_70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Why are tomatoes red?"},
]
outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

- **Use `bitsandbytes`**

The model checkpoints can be used in `8-bit` and `4-bit` for further memory optimizations using `bitsandbytes` and `transformers`. See the snippet below for usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "thellumi/LLuMi_Think_70B"
quantization_config = BitsAndBytesConfig(load_in_8bit=True)

quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
input_text = "Why are tomatoes red?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

output = quantized_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To load in 4-bit, simply pass `load_in_4bit=True`.

## 5. Usage Recommendations

**We recommend adhering to the following configurations when utilizing the LLuMi Think series models, including benchmarking, to achieve the expected performance:**

1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.

Additionally, DeepSeek has observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., outputting "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance. **To ensure that the model engages in thorough reasoning, we recommend forcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
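To illustrate recommendation 3, the sketch below sends a math problem with the suggested directive through the `pipeline` API. Using this repository's 3B checkpoint here is an assumption (the card's snippets reference the 70B model), and the sampling settings are illustrative rather than prescribed.

```python
import transformers
import torch

# Sketch: math prompting per recommendation 3 (user-only prompt, boxed answer).
pipe = transformers.pipeline(
    "text-generation",
    model="thellumi/LLuMi_Think_3B",  # assumption: the 3B family member follows the same usage pattern
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {
        "role": "user",
        "content": "Please reason step by step, and put your final answer within \\boxed{}. "
                   "Solve for x: 3x + 7 = 22.",
    },
]
out = pipe(messages, max_new_tokens=512, do_sample=True, temperature=0.6)
print(out[0]["generated_text"][-1])
```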
## 6. Training Data

**Overview:** LLuMi is built upon the robust Llama 3.3 architecture, which was pretrained on approximately 15 trillion tokens sourced from publicly available datasets. For fine-tuning, LLuMi leverages a combination of publicly available instruction datasets and over 10 million examples sourced from Hugging Face. This comprehensive training corpus has been curated to ensure high performance across various languages, with dedicated support for Turkish and other languages.

**Data Freshness:** The pretraining data includes content up to a cutoff date of August 2024, ensuring that LLuMi is aligned with recent language trends and developments.

## 7. Benchmarks

| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------|------------------|-------------------|-----------------|---------------------|----------------------|-------------------|
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| OpenAI o1-1217 | 79.2 | - | 96.4 | 75.7 | 63.4 | 2061 |
| OpenAI o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| OpenAI GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek R1 | 79.8 | - | 97.3 | 71.5 | 65.9 | 2209 |
| LLuMi Think 70B | 69.3 | 86.4 | 94.1 | 64.8 | 56.9 | 1603 |

**Note on Benchmark Results:** Due to hardware limitations, full-scale benchmark tests could not be performed, and the results may vary. We remain fully transparent about these constraints and are actively working towards securing the necessary resources to conduct comprehensive evaluations in the near future.

## 8. Responsibility & Safety

At LLuMi, we are committed to promoting responsible and ethical use of our technology. We recognize that large language models carry inherent risks and potential for misuse, and we have taken several measures to mitigate these challenges:

- **Bias Mitigation:** We have implemented various strategies during training to minimize biases in model outputs. However, users should be aware that, despite these efforts, occasional biases or unintended outputs may still occur.
- **Usage Guidelines:** LLuMi is designed for research and responsible deployment. We strongly encourage users to adhere to ethical guidelines, applicable laws, and best practices when using the model. Generating harmful, misleading, or offensive content is strictly prohibited.
- **Safety Measures:** Users deploying LLuMi in real-world applications should implement additional safety filters and monitoring mechanisms. We recommend regular audits and evaluations to ensure that the model's outputs remain within acceptable ethical boundaries.
- **Community Engagement:** We invite the community to provide feedback on any safety or ethical issues encountered during usage. This collaborative approach is vital for continuously refining the model and addressing potential risks.
- **Transparency and Accountability:** By open-sourcing LLuMi, we aim to foster transparency and accountability. We commit to ongoing research and updates focused on improving the model's safety and ethical performance.

By using LLuMi, you agree to follow these guidelines and contribute to a safer, more responsible AI ecosystem.

## 9. License

This code repository and the model weights are licensed under the [MIT License](https://choosealicense.com/licenses/mit/). The LLuMi Think series supports commercial use and allows for any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:

- LLuMi Think 3B is derived from [Qwen-2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B), which is originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE).
- LLuMi Think 8B is derived from [Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and is originally licensed under the [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- LLuMi Think 70B is derived from [Llama3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) and is originally licensed under the [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).

## 10. Citation

```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
      title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
      author={DeepSeek-AI},
      year={2025},
      eprint={2501.12948},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.12948},
}
```

```
@misc{thellumi,
      author = {The Lucy},
      month = feb,
      title = {{LLuMi Think}},
      howpublished = {https://llumi.tech},
      year = {2025}
}
```

## 11. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
bowilleatyou/c71c1479-b8d6-4202-854f-fd8c4ed1b600
bowilleatyou
2025-02-26T00:26:34Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-02-25T21:52:02Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
tttx/model-250-force-022525
tttx
2025-02-26T00:26:30Z
0
0
peft
[ "peft", "safetensors", "qwen2", "alignment-handbook", "trl", "sft", "generated_from_trainer", "dataset:tttx/250-force-022525", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "license:mit", "region:us" ]
null
2025-02-26T00:09:54Z
---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- tttx/250-force-022525
model-index:
- name: model-250-force-022525
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# model-250-force-022525

This model is a fine-tuned version of [tttx/sft-32b-020925-19k-5ep](https://huggingface.co/tttx/sft-32b-020925-19k-5ep) on the tttx/250-force-022525 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 100
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3

### Training results

### Framework versions

- PEFT 0.13.2
- Transformers 4.47.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
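Since this checkpoint is a PEFT (LoRA-style) adapter rather than a standalone model, inference typically requires loading a base model first and attaching the adapter. The following is a minimal sketch, not part of the auto-generated card: it assumes the adapter targets the `base_model` declared in the front matter above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the adapter attaches to the base model named in the card's front matter.
base_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
adapter_id = "tttx/model-250-force-022525"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```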
markldn/b1-Q4_K_M-GGUF
markldn
2025-02-26T00:23:19Z
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "base_model:straykittycat/b1", "base_model:quantized:straykittycat/b1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-26T00:22:57Z
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: straykittycat/b1
---

# markldn/b1-Q4_K_M-GGUF

This model was converted to GGUF format from [`straykittycat/b1`](https://huggingface.co/straykittycat/b1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/straykittycat/b1) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo markldn/b1-Q4_K_M-GGUF --hf-file b1-q4_k_m.gguf -c 2048
```
apitchai/Llama-3.2-3B-Instruct-F1-NLQ-CoT-5-Epochs-Finetuned-16bit
apitchai
2025-02-26T00:22:59Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-02-26T00:22:31Z
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** apitchai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
texanrangee/e221c6c4-e718-44d1-9176-bccd9d7d777a
texanrangee
2025-02-26T00:20:45Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-02-25T22:03:33Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
mradermacher/Exurbia-Delta9-i1-GGUF
mradermacher
2025-02-26T00:18:47Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ClaudioItaly/Exurbia-Delta9", "base_model:quantized:ClaudioItaly/Exurbia-Delta9", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-25T22:52:21Z
---
base_model: ClaudioItaly/Exurbia-Delta9
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->

weighted/imatrix quants of https://huggingface.co/ClaudioItaly/Exurbia-Delta9

<!-- provided-files -->

static quants are available at https://huggingface.co/mradermacher/Exurbia-Delta9-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-IQ4_NL.gguf) | i1-IQ4_NL | 5.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q4_1.gguf) | i1-Q4_1 | 6.1 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 |  |
| [GGUF](https://huggingface.co/mradermacher/Exurbia-Delta9-i1-GGUF/resolve/main/Exurbia-Delta9.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
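For Python users, one possible way to try the quants listed above is through the `llama-cpp-python` bindings; the sketch below is an assumption-laden illustration (it presumes your installed version provides the `Llama.from_pretrained` download helper) and is not part of the original card.

```python
from llama_cpp import Llama

# Sketch: download and run one of the i1 quants via llama-cpp-python.
# The filename matches the "fast, recommended" Q4_K_M entry in the table above.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Exurbia-Delta9-i1-GGUF",
    filename="Exurbia-Delta9.i1-Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=32)
print(out["choices"][0]["text"])
```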
mradermacher/AtmaLLaMA-GGUF
mradermacher
2025-02-26T00:18:47Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:RakshitAi/AtmaLLaMA", "base_model:quantized:RakshitAi/AtmaLLaMA", "license:mit", "endpoints_compatible", "region:us" ]
null
2025-02-25T19:07:30Z
---
base_model: RakshitAi/AtmaLLaMA
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/RakshitAi/AtmaLLaMA

<!-- provided-files -->

weighted/imatrix quants are available at https://huggingface.co/mradermacher/AtmaLLaMA-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q2_K.gguf) | Q2_K | 2.6 |  |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q3_K_S.gguf) | Q3_K_S | 3.0 |  |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q3_K_L.gguf) | Q3_K_L | 3.7 |  |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.IQ4_XS.gguf) | IQ4_XS | 3.7 |  |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q5_K_S.gguf) | Q5_K_S | 4.8 |  |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q5_K_M.gguf) | Q5_K_M | 4.9 |  |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/AtmaLLaMA-GGUF/resolve/main/AtmaLLaMA.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
Blazgo/temp-model-for-2-mini-004
Blazgo
2025-02-26T00:17:01Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:CultriX/Qwen2.5-14B-ReasoningMerge", "base_model:merge:CultriX/Qwen2.5-14B-ReasoningMerge", "base_model:arcee-ai/Virtuoso-Small-v2", "base_model:merge:arcee-ai/Virtuoso-Small-v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-26T00:11:25Z
---
base_model:
- CultriX/Qwen2.5-14B-ReasoningMerge
- arcee-ai/Virtuoso-Small-v2
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) as the base.

### Models Merged

The following models were included in the merge:
* [CultriX/Qwen2.5-14B-ReasoningMerge](https://huggingface.co/CultriX/Qwen2.5-14B-ReasoningMerge)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: arcee-ai/Virtuoso-Small-v2
    parameters:
      density: 0.5
      weight: 0.5
  - model: CultriX/Qwen2.5-14B-ReasoningMerge
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
Maxymin/distilbert-base-uncased-finetuned-squad
Maxymin
2025-02-26T00:14:53Z
1
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2025-02-23T08:59:05Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2610

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1697        | 1.0   | 5533  | 1.1382          |
| 0.8147        | 2.0   | 11066 | 1.1588          |
| 0.6341        | 3.0   | 16599 | 1.2610          |

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
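Since the card's usage sections are still placeholders, a minimal question-answering sketch with the transformers pipeline could look like this — the model ID comes from the record above, and the question/context pair is invented for illustration:

```python
# Minimal QA sketch; the question/context pair is made up for illustration.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Maxymin/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What does DistilBERT distill?",
    context="DistilBERT is a small, fast Transformer trained by distilling BERT base.",
)
print(result["answer"], round(result["score"], 3))
```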
byKim93/klue-roberta-base-klue-sts-mrc-2
byKim93
2025-02-26T00:12:19Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:17552", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:byKim93/klue-roberta-base-klue-sts-2", "base_model:finetune:byKim93/klue-roberta-base-klue-sts-2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-02-26T00:12:06Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:17552 - loss:MultipleNegativesRankingLoss base_model: byKim93/klue-roberta-base-klue-sts-2 widget: - source_sentence: ๋ฏธ๊ตญ์—์„œ ๋‘ ๋ฒˆ์งธ๋กœ ๋งŽ์€ ์œ ํ•™์ƒ ๊ตญ์ ์€? sentences: - ๋ฐ•๊ทผํ˜œ ๋Œ€ํ†ต๋ น์ด 17์ผ ํฌํ•ญ์ œ์ฒ ์†Œ ๋‚ด ํŒŒ์ด๋„ฅ์Šค 3๊ณต์žฅ์„ ์ฐพ์€ ๊ฒƒ์€ ์„ธ๊ณ„ ์ œ์ฒ  ๊ธฐ์ˆ ์„ ์„ ๋„ํ•˜๋Š” ํ•ต์‹ฌ ์‚ฌ์—…์ด๋ผ๋Š” ์ ์„ ํ‰๊ฐ€ํ•œ ๊ฒƒ์ด๋ผ๊ณ  ํฌ์Šค์ฝ” ์ธก์€ ์„ค๋ช…ํ–ˆ๋‹ค.ํŒŒ์ด๋„ฅ์Šค 3๊ณต์žฅ์€ ์ง€๋‚œ 1์›” ๊ฐ€๋™์„ ์‹œ์ž‘ํ–ˆ๋‹ค. ํ•˜๋ฃจ 5700, ์—ฐ 200๋งŒ์˜ ์‡ณ๋ฌผ์„ ๋ฝ‘์•„๋‚ด๊ณ  ์žˆ๋‹ค. ํฌ์Šค์ฝ” ๊ด€๊ณ„์ž๋Š” โ€œ์ด๊ณณ์—์„œ ์ƒ์‚ฐํ•œ ์‡ณ๋ฌผ์€ ๋ชจ๋‘ ์ œ๊ฐ•๊ณต์žฅ์—์„œ ์‚ฌ์šฉ๋œ๋‹คโ€๋ฉฐ โ€œ๊ธฐ์กด์˜ ๊ณ ๋กœ์—์„œ ๋‚˜์˜จ ์‡ณ๋ฌผ๊ณผ ํ’ˆ์งˆ์— ์ „ํ˜€ ์ฐจ์ด๊ฐ€ ์—†๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ํฌ์Šค์ฝ”๋Š” 1992๋…„ ํŒŒ์ด๋„ฅ์Šค ๊ณต๋ฒ• ๊ธฐ์ˆ  ๊ฐœ๋ฐœ์— ์ฐฉ์ˆ˜ํ•ด 11๋…„ ๋งŒ์ธ 2003๋…„ ์—ฐ 60๋งŒ ๊ทœ๋ชจ์˜ 1๊ณต์žฅ ๊ฐ€๋™์„ ์‹œ์ž‘ํ–ˆ๋‹ค.ํฌ์Šค์ฝ” ๊ด€๊ณ„์ž๋Š” โ€œ๋‹ค๋ฅธ ์ฒ ๊ฐ•์—…์ฒด๋“ค๋„ ํŒŒ์ด๋„ฅ์Šค์™€ ๊ฐ™์€ ์šฉ์„ ๊ธฐ์ˆ  ๊ฐœ๋ฐœ์— ๋‚˜์„ฐ์ง€๋งŒ ๋ชจ๋‘ ์‹คํŒจํ–ˆ๋‹คโ€๋ฉฐ โ€œ์ด์— ํ•ด์™ธ ์—…์ฒด๋“ค๋กœ๋ถ€ํ„ฐ ๊ธฐ์ˆ ์ˆ˜์ถœ ์š”์ฒญ์ด ์ด์–ด์ง€๊ณ  ์žˆ๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์‹ค์ œ๋กœ 3๊ณต์žฅ ๊ฐ€๋™์œผ๋กœ ์œ ํœด์„ค๋น„๊ฐ€ ๋œ 1๊ณต์žฅ ์„ค๋น„๋Š” ์ธ๋„์˜ ๋ฉ”์Šค์ฝ”์Šคํ‹ธ์ด ๊ด€์‹ฌ์„ ๋ณด์—ฌ ์ง€๋‚œ 8์›” ์„ค๋น„ ๋งค๊ฐ์— ๊ด€ํ•œ ์–‘ํ•ด๊ฐ์„œ(MOU)๋ฅผ ์ฒด๊ฒฐํ–ˆ๋‹ค. ์ค‘๊ตญ ์ถฉ์นญ๊ฐ•์ฒ ๊ณผ ํ•จ๊ป˜ ์ถ”์ง„ ์ค‘์ธ ์—ฐ์‚ฐ 300๋งŒt ๊ทœ๋ชจ์˜ ์ถฉ์นญ ํŒŒ์ด๋„ฅ์Šค ๊ณต์žฅ๋„ ๋‚ด๋…„ ์ค‘ ์ฒซ ์‚ฝ์„ ๋œฐ ์˜ˆ์ •์ด๋‹ค.ํฌ์Šค์ฝ”๋Š” ํŒŒ์ด๋„ฅ์Šค ๊ณต๋ฒ•์ด ๊ธฐ์กด ๊ณ ๋กœ ๋ฐฉ์‹๋ณด๋‹ค ์ƒ์‚ฐ๋น„์šฉ์ด ์ €๋ ดํ•˜๊ณ  ํ™˜๊ฒฝ์นœํ™”์ ์ธ ๋งŒํผ ํ•ด์™ธ ์ˆ˜์ถœ์ด ํ™•๋Œ€๋  ๊ฒƒ์œผ๋กœ ๊ธฐ๋Œ€ํ•˜๊ณ  ์žˆ๋‹ค. ํŒŒ์ด๋„ฅ์Šค๋Š” ๊ณ ๋กœ ๋ฐฉ์‹์— ๋น„ํ•ด ํ™ฉ์‚ฐํ™”๋ฌผ(SOx)๊ณผ ์งˆ์†Œ์‚ฐํ™”๋ฌผ(NOx) ๋ฐฐ์ถœ๋Ÿ‰์ด ๊ฐ๊ฐ 60%, 85% ์ •๋„ ์ ๋‹ค. ํšŒ์‚ฌ ๊ด€๊ณ„์ž๋Š” โ€œ๊ณต์žฅ ์„ค๋น„์˜ 85%๋ฅผ ๊ตญ๋‚ด 37๊ฐœ ์ค‘์†Œ๊ธฐ์—…์—์„œ ์ œ์ž‘ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํ•ด์™ธ์— ์ˆ˜์ถœํ•˜๋ฉด ์ค‘์†Œ๊ธฐ์—… ๋™๋ฐ˜์„ฑ์žฅ ํšจ๊ณผ๋ฅผ ๊ธฐ๋Œ€ํ•  ์ˆ˜ ์žˆ๋‹คโ€๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค. - ํ•œ๊ตญ ์‚ฌ๋žŒ๋“ค์€ ์ข…์ข… ์–‘ ๊ทน๋‹จ์„ ์˜ค๊ฐ„๋‹ค. ๊ตํšŒ์— ๋‚˜๊ฐ€๋ฉด์„œ ์ ์„ ๋ณด๋Š”๊ฐ€ ํ•˜๋ฉด, ์ ˆ์— ๋‹ค๋‹ˆ๋ฉด์„œ ์ •ํ™”์ˆ˜๋ฅผ ๋– ๋†“๊ณ  ๋ฏผ๊ฐ„์‹ ์•™์„ ์ง€ํ‚จ๋‹ค. ์‚ฐ์‹ ์— ์น˜์„ฑ์„ ๋“œ๋ฆฌ๋ฉด์„œ ์œ ๊ต์ ์ธ ์ œ์‚ฌ๋ฅผ ์ง€๋‚ด๊ธฐ๋„ ํ•œ๋‹ค. ํ•œ์˜ฅ์—๋Š” ๋‚จ๋ฐฉ๋ฌธํ™”์˜ ์ƒ์ง•์ธ ๋Œ€์ฒญ๋งˆ๋ฃจ์™€ ๋ถ๋ฐฉ์—์„œ ์œ ๋ž˜ํ•œ ์˜จ๋Œ์„ ํ•จ๊ป˜ ๋งŒ๋“ค์—ˆ๋‹ค. ํ•œ(ๆจ)์œผ๋กœ ์ง„ ์‘์–ด๋ฆฌ๋ฅผ ํฅ(่ˆˆ)์œผ๋กœ ํ’€์–ด๋‚ธ๋‹ค. ใ€Š๊ทน๋‹จ์˜ ํ•œ๊ตญ์ธ, ๊ทน๋‹จ์˜ ์ฐฝ์กฐ์„ฑใ€‹์€ โ€˜๊ทน๋‹จโ€™์ด๋ž€ ์—ด์‡ณ๋ง๋กœ ํ•œ๊ตญ์ธ์˜ ๊ธฐ์งˆ์„ ๋ถ„์„ํ•œ ์ฑ…์ด๋‹ค. ์ €์ž๋Š” ๊ทน๋‹จ์„ ํฌ์šฉํ•˜๋Š” ํ•œ๊ตญ์ธ์˜ ํŠน์ง•์„ ๋„ค ๊ฐ€์ง€๋กœ ๋ถ„๋ฅ˜ํ•œ๋‹ค. ํ•œ๊ตญ์ธ์€ ๊ทน๋‹จ๊ณผ ๊ทน๋‹จ์„ ์ˆ˜์šฉํ•˜๊ณ , ๊ทน๋‹จ์„ ๋„˜๋‚˜๋“ค๊ณ , ๊ทน๋‹จ์˜ ์ค‘๊ฐ„์ง€๋Œ€๋ฅผ ๋งŒ๋“ค์–ด ์ถฉ๋Œ์„ ํ”ผํ•˜๊ณ , ๋ถ€๋ถ„์„ ๊นจ๋ถ€์ˆ˜์–ด์„œ ๋” ํฐ ํ†ตํ•ฉ์„ ๋งŒ๋“ค์–ด๋‚ธ๋‹ค๋Š” ๊ฒƒ์ด๋‹ค. ์ €์ž๋Š” โ€œํ•œ๊ตญ์ธ์€ ์„œ๋กœ ๋Œ€์ฒ™์ ์— ์žˆ๋Š” ๊ฒƒ๋“ค์„ ๋Œ์–ด์•ˆ๊ณ , ๋‚˜์•„๊ฐ€ ์—ฌ๋Ÿฌ ๊ฐ€์ง€๋ฅผ ์šฉ๊ด‘๋กœ์— ๋„ฃ๊ณ  ์œต๋ณตํ•ฉํ•ด์„œ ์ƒˆ๋กœ์šด ๊ฒƒ์„ ๋ฝ‘์•„๋‚ธ๋‹คโ€๋ฉฐ โ€œ์ด๊ฒƒ์ด ํ•œ๋ฏผ์กฑ์ด ๋ฐœ์ „ํ•  ์ˆ˜๋ฐ–์— ์—†๋Š” ์ด์œ โ€๋ผ๊ณ  ๋งํ•œ๋‹ค.ํ•œ๊ตญ์ธ์€ โ€˜๋นจ๋ฆฌ๋นจ๋ฆฌโ€™๋ฅผ โ€˜์€๊ทผ๊ณผ ๋ˆ๊ธฐโ€™ ์žˆ๊ฒŒ ํ•˜๋Š” ๋ฏผ์กฑ์ด๋‹ค. ์ €์ž๋Š” โ€œ์–ด๋А ๋ฏผ์กฑ์ด ๋นจ๋ฆฌ๋นจ๋ฆฌ ํ•˜๋ฉด์„œ ์™„์„ฑ๋„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ๋А๋ƒโ€๋ฉฐ โ€œ์–ต์ฒ™์Šค๋Ÿฝ๊ฒŒ ๋†€๊ณ  ์–ต์ฒ™์Šค๋Ÿฝ๊ฒŒ ์ผํ•˜๋Š” ์‚ฌ๋žŒ๋“ค์ด ํ•œ๊ตญ์ธโ€์ด๋ผ๊ณ  ๋งํ•œ๋‹ค. 
๋™์‹œ์— โ€œ์กฐ์„ ์‹œ๋Œ€ ๊ถ์—์„œ๋Š” 500๋…„์„ ํ•˜๋ฃจ๋„ ๋น ์ง์—†์ด ์™•์˜ ์ผ๊ฑฐ์ˆ˜์ผํˆฌ์กฑ์„ ๊ธฐ๋กํ–ˆ๊ณ  ๋ฐฑ์„ฑ๋“ค์€ ๋งค์ผ ๋…ผ์œผ๋กœ ๋‚˜๊ฐ€ ๋†์‚ฌ์ง“๋Š” ๊ณ ์—ญ์„ ๊ฐ๋‹นํ–ˆ๋‹คโ€๋ฉฐ โ€œํ•œ๊ตญ์ธ์€ ํ•˜๋‚˜๋ฅผ ์‹œ์ž‘ํ•˜๋ฉด ์ง€์น˜์ง€ ์•Š๊ณ  ์˜ค๋žœ ๊ธฐ๊ฐ„ ์ง€์†ํ•˜๋Š” ๋ˆ๊ธฐ๊ฐ€ ์žˆ๋Š” ์‚ฌ๋žŒ๋“คโ€์ด๋ผ๊ณ  ๋ถ„์„ํ•œ๋‹ค. ์„œ๋กœ ์ƒ์ถฉ๋ผ ๋ณด์ด๋Š” ๋‘ ๊ฐ€์ง€ ๊ธฐ์งˆ์ด ๊ณต์กดํ•˜๋Š” ๊ฒƒ์ด๋‹ค.์ €์ž๋Š” ์šฐ๋ฆฌ๋ง์—๋„ ๊ทน๋‹จ์„ ํฌ์šฉํ•˜๋Š” ๋ฌธํ™”๊ฐ€ ๋ฐ˜์˜๋๋‹ค๊ณ  ๋ณธ๋‹ค. ๋‚˜๋“ค์ด, ๋นผ๋‹ซ์ด, ์—ฌ๋‹ซ์ด ๋“ฑ ๋ฐ˜๋Œ€๋˜๋Š” ์š”์†Œ๋ฅผ ํ•˜๋‚˜๋กœ ๋ฌถ์€ ๋‹จ์–ด๊ฐ€ ์ˆ˜์—†์ด ๋งŽ๋‹ค๋Š” ๊ฒŒ ์ €์ž์˜ ์„ค๋ช…์ด๋‹ค. ํ•œ๊ตญ์˜ ์Œ์‹ ๋ฌธํ™”๋„ ์–‘ ๊ทน๋‹จ์„ ๋„˜๋‚˜๋“ ๋‹ค. ์ •์ฐฉ์˜ ์‚ฐ๋ฌผ์ธ ๋ฐœํšจ์‹ํ’ˆ์ด ์œ ๋‚œํžˆ ๋ฐœ๋‹ฌํ•œ ํ•œํŽธ ๊ฒ‰์ ˆ์ด, ์ƒ์ถ”์Œˆ ๊ฐ™์€ ์ž์—ฐ ์ƒํƒœ์˜ ์Œ์‹์„ ๊ทธ๋Œ€๋กœ ์ฆ๊ธฐ๊ธฐ๋„ ํ•œ๋‹ค. ์˜ค๋ž˜ ๋“์ด๋Š” ๋š๋ฐฐ๊ธฐ์™€ ํ•œ์ˆœ๊ฐ„์— ํŒŒ๋ฅด๋ฅด ๋“์–ด์˜ค๋ฅด๋Š” ์–‘์€๋ƒ„๋น„๋ฅผ ๋ชจ๋‘ ์• ์šฉํ•œ๋‹ค.ํ•œ๊ตญ์ธ์˜ ์ฐฝ์กฐ ์œ ์ „์ž๋Š” ๋•Œ๋กœ ์ ๊ทน์„ฑ์œผ๋กœ ํ‘œ์ถœ๋œ๋‹ค. ํ•ด์™ธ์— ๋‚˜๊ฐ€๋ณด๋ฉด ์–ด๋”œ ๊ฐ€๋„ ํ•œ ๋ฒˆ์€ ํ•œ๊ตญ ์‚ฌ๋žŒ์„ ๋งˆ์ฃผ์น  ๋งŒํผ ํ•œ๊ตญ์ธ๋“ค์€ ์„ธ๊ณ„ ๊ณณ๊ณณ์— ํผ์ ธ ์žˆ๋‹ค. ๋ฏธ๊ตญ ๋‚ด ์œ ํ•™์ƒ ์ˆ˜๋„ ์ค‘๊ตญ ์ธ๋„์— ์ด์–ด ์„ธ ๋ฒˆ์งธ๋กœ ๋งŽ๋‹ค. ์–ด๋”” ์ด๋ฟ์ผ๊นŒ. ์œ ๋Œ€์ธ์€ ์„ธ๊ณ„ 60์—ฌ ๊ฐœ๊ตญ์— ํฉ์–ด์ ธ ์‚ด๊ณ , ์ค‘๊ตญ์ธ์€ 100์—ฌ ๊ฐœ๊ตญ์—์„œ ์ด๋ฏผ์ž๋กœ ์‚ด๊ณ  ์žˆ๋Š”๋ฐ ์ธ๊ตฌ๊ฐ€ ๊ณ ์ž‘ 5000๋งŒ๋ช…์— ๋ถˆ๊ณผํ•œ ํ•œ๊ตญ ์‚ฌ๋žŒ๋“ค์€ 175๊ฐœ๊ตญ์— ์‚ถ์˜ ํ„ฐ์ „์„ ์žก์•˜๋‹ค. ์ €์ž๋Š” โ€œ์ƒˆ๋กœ์šด ๊ฒƒ์— ๋Œ€ํ•œ ํ˜ธ๊ธฐ์‹ฌ๊ณผ ๊ตฝํž ์ค„ ๋ชจ๋ฅด๋Š” ๋„์ „์ •์‹ ์ด ๋งŒ๋“ค์–ด๋‚ธ ๊ฒฐ๊ณผโ€๋ผ๋ฉฐ โ€œ๊ฐ€์ง„ ๊ฒƒ์ด๋ผ๊ณ ๋Š” ๋งจ๋ชธ๋ฟ์ธ ์‚ฌ๋žŒ๋“ค์ด ๊ทผ๋ฉด๊ณผ ์„ฑ์‹ค๋กœ ์„ธ๊ณ„ ๊ณณ๊ณณ์„ ํŒŒ๊ณ ๋“ค๊ณ  ์žˆ๋‹คโ€๊ณ  ๋งํ•œ๋‹ค. - ๋ณดํ—˜์ƒํ’ˆ๋„ ๊ธฐํ”„ํ‹ฐ์ฝ˜ ์ฃผ๊ณ ๋ฐ›๋Š”๋‹ค๋ฉด โ€ฆ.์ตœ์ง„ํ™˜ ํ˜„๋Œ€๋ผ์ดํ”„์ƒ๋ช… ๋Œ€ํ‘œ. ๋Œ€ํ˜•๋งˆํŠธ์—์„œ ๋ณดํ—˜์ƒํ’ˆ์„ ํŒ๋งคํ•˜๊ณ  ์ตœ๊ทผ์—๋Š” ์žํŒ๊ธฐ์—์„œ๋„ ๋ณดํ—˜์ƒํ’ˆ์„ ํŒ”๊ธฐ ์‹œ์ž‘ํ•ด ์ฃผ๋ชฉ์„ ๋ฐ›๊ณ  ์žˆ๋Š”๋ฐ. 20, 30๋Œ€๋ฅผ ๊ฒจ๋ƒฅํ•ด ๋ณดํ—˜์ƒํ’ˆ์„ ์„ ๋ฌผํ•˜๋Š” ๋ฐฉ์•ˆ๋„ ๊ตฌ์ƒํ•˜๊ณ  ์žˆ๋‹ค๊ณ . ํœด๋Œ€ํฐ์œผ๋กœ ๊ธฐํ”„ํ‹ฐ์ฝ˜์„ ์ฃผ๊ณ ๋ฐ›๋Š” ๊ฒƒ์ฒ˜๋Ÿผ ๋ณดํ—˜์ƒํ’ˆ ๊ธฐํ”„ํ‹ฐ์ฝ˜๋„ ์ฃผ๊ณ ๋ฐ›์„ ๋‚ ์ด ์˜ฌ์ง€.๋ฐ•์›์ˆœ โ€œ์ €์˜ ์žฌ์„ ์„ ์˜์‹ฌํ•˜๋Š” ๊ฒƒ ๊ฐ™๋‹คโ€19์ผ ์„œ์šธ์‹œ์ฒญ ๋ธŒ๋ฆฌํ•‘๋ฃธ. ๋ฐ•์›์ˆœ ์‹œ์žฅ์ด โ€˜์„œ๋ฏผ ์ฃผ๊ฑฐ์•ˆ์ • ๋Œ€์ฑ…โ€™์„ ๋ฐœํ‘œํ•œ ๋’ค ํ•œ ๊ธฐ์ž๊ฐ€ โ€œ์žฌ์„  ์—ฌ๋ถ€์™€ ์ƒ๊ด€์—†์ด ๊ณ„ํš์„ ์ถ”์ง„ํ•˜๋Š” ๋ฐ ๋ฌด๋ฆฌ๊ฐ€ ์—†๊ฒ ๋А๋ƒโ€๊ณ  ์งˆ๋ฌธ. 2018๋…„๊นŒ์ง€ ๋‹ฌ์„ฑํ•˜๊ฒ ๋‹ค๋Š” ๊ฒƒ์€ ์žฌ์„ ์„ ์—ผ๋‘์— ๋‘” ๊ณต์•ฝ ์•„๋‹ˆ๋ƒ๋Š” ์–˜๊ธฐ. ๋ฐ• ์‹œ์žฅ์€ โ€œ์ €์˜ ์žฌ์„ ์— ์ƒ๋‹นํ•œ ์˜๋ฌธ์„ ๊ฐ–๊ณ  ๊ณ„์‹  ๊ฒƒ ๊ฐ™๋‹คโ€๋ฉด์„œโ€ฆ.KT ๊ด‘๊ณ  ์† ์ฝง์ˆ˜์—ผ ์ธํ˜•์€ โ€˜์ง€๋“œ๋ž˜๊ณคโ€™?KT๊ฐ€ ๋ฐฉ์˜ ์ค‘์ธ โ€˜์˜ฌ๋ ˆ ๊ด‘๋Œ€์—ญ LTE-์ง€ํ•˜์ฒ ํŽธโ€™ ๊ด‘๊ณ ๊ฐ€ ๋…ผ๋ž€. ๋ชจ์ž๋ฅผ ์‚๋”ฑํ•˜๊ฒŒ ์“ฐ๊ณ  ์ฝง์ˆ˜์—ผ์„ ๊ธฐ๋ฅธ ์•„์ €์”จ๊ฐ€ โ€œ๊ด‘๋Œ€์—ญ, ๋นจ๋ผ์š” ๋นจ๋ผโ€๋ผ๊ณ  ๋งํ•˜์ž KT ๋ชจ๋ธ์ด โ€œ๋ชจ๋“  ์ง€ํ•˜์ฒ  ์•ˆ์—์„œ ๋‹ค ๋˜๋А๋ƒ?โ€๊ณ  ๋ฌป๊ณ  โ€œ์•ˆ ๋˜๋Š”๊ตฌ๋‚˜โ€๋ผ๊ณ  ๋งํ•˜๋Š”๋ฐ, ์ด ์•„์ €์”จ๊ฐ€ LG์œ ํ”Œ๋Ÿฌ์Šค ๊ด‘๊ณ  ๋ชจ๋ธ์ธ ์ง€๋“œ๋ž˜๊ณค์„ ๋‹ฎ์•˜์œผ๋‹ˆ.์ œํ”„ ๋ฒ ์ €์Šค์˜โ€˜์–ธ๋ก  ๋งˆ๋ฒ•โ€™์€ ํ†ตํ• ๊นŒ? - source_sentence: ๋ฏธ์„ธ๋จผ์ง€์— ์˜ํ•œ ์งˆํ™˜์ด ์•„๋‹Œ ๊ฒƒ์€ ๋ฌด์—‡์ธ๊ฐ€? sentences: - ๋ฏธ์„ธ๋จผ์ง€๊ฐ€ ๊ธฐ์Šน์„ ๋ถ€๋ฆฌ๊ณ  ์žˆ๋‹ค. ๋ฏธ์„ธ๋จผ์ง€๋Š” ๋ชธ์†์— ์Œ“์ด๋ฉด ํ์™€ ํ˜ˆ๊ด€ ๋“ฑ์— ๋ฌธ์ œ๋ฅผ ์ผ์œผํ‚ฌ ์ˆ˜ ์žˆ๋‹ค. ํ˜ธํก๊ธฐ ์งˆํ™˜์ž์˜ ๊ฒฝ์šฐ ๊ธฐ์นจ, ์ฒœ์‹ ์ฆ์ƒ์ด ์•…ํ™”๋˜๊ธฐ๋„ ํ•œ๋‹ค. 
์™ธ์ถœ ์‹œ ๋ฏธ์„ธ๋จผ์ง€ ์ฃผ์˜๋ณด ๋ฐœ๋ น ์—ฌ๋ถ€๋ฅผ ํ™•์ธํ•˜๊ณ  ๊ฐ์ข… ๊ฑด๊ฐ• ํ”ผํ•ด๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•ด ๋…ธ๋ ฅํ•ด์•ผ ํ•œ๋‹ค.๋ฏธ์„ธ๋จผ์ง€๋Š” ๊ณต๊ธฐ ์ค‘์— ๋– ๋Œ์•„๋‹ค๋‹ˆ๋Š” ์ค‘๊ธˆ์† ๋“ฑ์„ ๋งํ•œ๋‹ค. ์ง€๋ฆ„์ด 10๋งˆ์ดํฌ๋กœ๋ฏธํ„ฐ(ใŽ›, 1ใŽ›=100๋งŒ๋ถ„์˜ 1m)๋ณด๋‹ค ์ž‘์•„ ํ๋‚˜ ํ˜ˆ๊ด€์œผ๋กœ ๋“ค์–ด๊ฐˆ ์ˆ˜ ์žˆ๋‹ค. ๋ฏธ์„ธ๋จผ์ง€ ๋…ธ์ถœ์ด ์‚ฌ๋ง๋ฅ ์„ ๋†’์ธ๋‹ค๋Š” ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋„ ์žˆ๋‹ค. ๊ฐ‘์ž๊ธฐ ๋งŽ์€ ์–‘์˜ ๋ฏธ์„ธ๋จผ์ง€์— ๋…ธ์ถœ๋˜๋ฉด ๊ธฐ์นจ, ํ˜ธํก๊ณค๋ž€ ๋“ฑ์˜ ์ฆ์ƒ์„ ํ˜ธ์†Œํ•  ์ˆ˜ ์žˆ๋‹ค. ์ฒœ์‹์ด ์•…ํ™”๋˜๊ณ  ๋ถ€์ •๋งฅ์ด ์ƒ๊ธฐ๊ธฐ๋„ ํ•œ๋‹ค.๋ฏธ์„ธ๋จผ์ง€๋กœ ์ธํ•œ ๊ฑด๊ฐ•ํ”ผํ•ด๋ฅผ ๋ง‰๊ธฐ ์œ„ํ•ด์„œ๋Š” ์™ธ์ถœํ•  ๋•Œ ๋งˆ์Šคํฌ, ๋ณดํ˜ธ์•ˆ๊ฒฝ, ๋ชจ์ž ๋“ฑ์„ ์ฐฉ์šฉํ•˜๋Š” ๊ฒƒ์ด ์ข‹๋‹ค. ์ตœ์ฒœ์›… ๊ฐ•๋™๊ฒฝํฌ๋Œ€๋ณ‘์› ํ˜ธํก๊ธฐ๋‚ด๊ณผ ๊ต์ˆ˜๋Š” โ€œ๋ฏธ์„ธ๋จผ์ง€๋Š” ์ฃผ๋กœ ํ˜ธํก๊ธฐ๋ฅผ ํ†ตํ•ด ์ฒด๋‚ด๋กœ ๋“ค์–ด์˜จ๋‹คโ€๋ฉฐ โ€œ๋งŒ์„ฑํ์‡„์„ฑํ์งˆํ™˜ ๋“ฑ ๋งŒ์„ฑ ํ˜ธํก๊ธฐ ์งˆํ™˜์ž๋Š” ์™ธ์ถœ ์‹œ ํ™˜๊ฒฝ๋ถ€ ์ธ์ฆ๋งˆํฌ๊ฐ€ ์žˆ๋Š” ๋ฐฉ์ง„๋งˆ์Šคํฌ๋ฅผ ์ฐฉ์šฉํ•ด์•ผ ํ•œ๋‹คโ€๊ณ  ์กฐ์–ธํ–ˆ๋‹ค. ๋‚˜๊ฐ”๋‹ค ๋Œ์•„์˜ค๋ฉด ์ƒค์›Œ๋ฅผ ํ•ด ๋จธ๋ฆฌ์นด๋ฝ์ด๋‚˜ ์˜ท ๋“ฑ ๋ชธ์— ๋‚จ์•„ ์žˆ๋Š” ๋ฏธ์„ธ๋จผ์ง€๋ฅผ ์—†์• ์•ผ ํ•œ๋‹ค.๋ฏธ์„ธ๋จผ์ง€์™€ ํ•จ๊ป˜ ์„ธ๊ท  ๋“ฑ์ด ํ˜ธํก๊ธฐ๋ฅผ ํƒ€๊ณ  ๋ชธ์†์œผ๋กœ ๋“ค์–ด์˜ค๊ธฐ๋„ ํ•œ๋‹ค. ์ด๋•Œ ํ˜ธํก๊ธฐ๊ฐ€ ๊ฑด์กฐํ•˜๋ฉด ์™ธ๋ถ€์—์„œ ์นจํˆฌํ•œ ๊ท ์„ ๋ฐฐ์ถœํ•˜๋Š” ๊ธฐ๋Šฅ์ด ๋–จ์–ด์ง„๋‹ค. ์ด ๋•Œ๋ฌธ์— ํ˜ธํก๊ธฐ๋ฅผ ์ด‰์ด‰ํ•˜๊ฒŒ ์œ ์ง€ํ•ด์•ผ ํ•œ๋‹ค. ํ๋ฅด๋Š” ๋ฌผ์— ์ฝ”๋ฅผ ์ž์ฃผ ์”ป์œผ๋ฉด ๋ฏธ์„ธ๋จผ์ง€๋‚˜ ์„ธ๊ท  ๋“ฑ์ด ๋ฐ–์œผ๋กœ ๋‚˜๊ฐ€๋Š” ๋ฐ ๋„์›€์ด ๋œ๋‹ค. ๋งŒ์„ฑ ํ˜ธํก๊ธฐ ์งˆํ™˜์„ ์•“๋Š” ํ™˜์ž๋Š” ๋ชฉ ์•ˆ์ด ๊ฑด์กฐํ•˜๋ฉด ๊ธฐ์นจ ๋“ฑ์˜ ์ฆ์ƒ์ด ์‹ฌํ•ด์งˆ ์ˆ˜ ์žˆ๋‹ค. ๋ฌผ์„ ๋‘์„ธ ์ž” ์ •๋„ ์ฑ™๊ฒจ ๋งˆ์…”์•ผ ํ•œ๋‹ค.์ง‘์•ˆ์—๋งŒ ์žˆ๋‹ค๊ณ  ์•ˆ์‹ฌํ•ด์„  ์•ˆ ๋œ๋‹ค. ์ฒญ์†Œํ•  ๋•Œ๋Š” ์ฐฝ๋ฌธ์„ ๋‹ซ๊ณ  ํ•˜๋Š” ๊ฒŒ ๋‚ซ๋‹ค. ๋งŒ์„ฑ ํ˜ธํก๊ธฐ ์งˆํ™˜์ž๋ผ๋ฉด ์ผ๋ฐ˜ ์ฒญ์†Œ๊ธฐ ๋Œ€์‹  ๋ฏธ์„ธ๋จผ์ง€๋ฅผ ๊ฑธ๋Ÿฌ์ฃผ๋Š” ํŠน์ˆ˜ํ•„ํ„ฐ๊ฐ€ ๋‹ฌ๋ฆฐ ์ง„๊ณต์ฒญ์†Œ๊ธฐ๋ฅผ ์‚ฌ์šฉํ•ด์•ผ ํ•œ๋‹ค. ์นดํŽซ์ด๋‚˜ ์นจ๊ตฌ๋ฅ˜์—๋Š” ๋ฏธ์„ธ๋จผ์ง€๊ฐ€ ์‰ฝ๊ฒŒ ์Œ“์ผ ์ˆ˜ ์žˆ๋‹ค.์ด๋ฅผ ์˜ˆ๋ฐฉํ•˜๊ธฐ ์œ„ํ•ด ์„ฌ์œ  ์žฌ์งˆ ์นจ๊ตฌ๋ฅ˜ ๋“ฑ์€ ์ˆ˜๋‚ฉ์žฅ์— ๋„ฃ๊ฑฐ๋‚˜ ๋ฎ๊ฐœ๋ฅผ ์”Œ์›Œ ๋†“๋Š” ๊ฒƒ์ด ์ข‹๋‹ค. ๋ฏธ์„ธ๋จผ์ง€ ๋†๋„๊ฐ€ ๋‚ฎ์•„์ง€๊ฑฐ๋‚˜ ๋จผ์ง€ ์ฃผ์˜๋ณด๊ฐ€ ํ•ด์ œ๋˜๋ฉด ์ฐฝ๋ฌธ์„ ์—ด์–ด ํ™˜๊ธฐํ•ด์•ผ ํ•œ๋‹ค. ์นจ๊ตฌ๋ฅ˜ ๋“ฑ๋„ ํ„ธ์–ด ์‹ค๋‚ด์— ์Œ“์ธ ๋ฏธ์„ธ๋จผ์ง€๋ฅผ ์ œ๊ฑฐํ•ด์•ผ ํ•œ๋‹ค. - ํ‡ด๊ฑฐ ์œ„๊ธฐ์— ์ฒ˜ํ•œ ์•„๋™์ฃผ๊ฑฐ๋นˆ๊ณค๊ฐ€๊ตฌ๋ฅผ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•ด ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ ์‚ฌ์—…์ด ์‹œ์ž‘๋œ๋‹ค. ์ดˆ๋ก์šฐ์‚ฐ์–ด๋ฆฐ์ด์žฌ๋‹จ(ํšŒ์žฅ ์ด์ œํ›ˆ), ํ™ˆ์•ค์‡ผํ•‘(๋Œ€ํ‘œ์ด์‚ฌ ๊น€์˜ฅ์ฐฌ), ๊ตฌ๋กœ๊ตฌ์ฒญ(๊ตฌ์ฒญ์žฅ ์ด์„ฑ), ์„œ์šธ์ฃผํƒ๋„์‹œ๊ณต์‚ฌ(์ดํ•˜ โ€˜SH๊ณต์‚ฌโ€™, ์‚ฌ์žฅ ๊น€์„ธ์šฉ)๋Š” 24์ผ(๋ชฉ) ๊ตฌ๋กœ๊ตฌ์ฒญ ๋ฅด๋„ค์ƒ์Šคํ™€์—์„œ ใ€Œ๊ตฌ๋กœ๊ตฌ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ ์‚ฌ์—…ใ€์˜ ์—…๋ฌดํ˜‘์•ฝ์„ ์ง„ํ–‰ํ–ˆ๋‹ค. ๋ณธ ์—…๋ฌดํ˜‘์•ฝ์„ ํ†ตํ•ด ์ง€๋‚œ 7์›” 16์ผ ์‹œํ–‰๋œ ใ€Œ์„œ์šธํŠน๋ณ„์‹œ ์•„๋™ ์ฃผ๊ฑฐ๋นˆ๊ณค ํ•ด์†Œ๋ฅผ ์œ„ํ•œ ์ง€์› ์กฐ๋ก€์•ˆใ€์— ๋”ฐ๋ฅธ ์•„๋™ ๋Œ€์ƒ ์ฃผ๊ฑฐ ์ •์ฑ…์„ ํ˜„์‹คํ™” ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ๋‹ค. ์„œ์šธ์˜ ์ผ๋ถ€ ์ž์น˜๊ตฌ์™€ ์ง€์—ญ ์ฃผ๊ฑฐ๋ณต์ง€์„ผํ„ฐ์—์„œ๋Š” ๊ธฐ์กด์— ์•ฝ 40ํ˜ธ์˜ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ์„ ์šด์˜ํ•˜๋ฉฐ ๊ฐ‘์ž‘์Šค๋Ÿฝ๊ฒŒ ํ‡ด๊ฑฐ ์œ„๊ธฐ์— ์ฒ˜ํ•œ ๊ฐ€๊ตฌ๋ฅผ ์œ„ํ•ด ์ž„์‹œ ์ฃผ๊ฑฐ ์‹œ์„ค์„ ์ œ๊ณตํ•ด์™”๋‹ค. ํ•˜์ง€๋งŒ ๋ฐ˜์ง€ํ•˜ ์ฃผํƒ ๋˜๋Š” ๋…ธํ›„ ๋œ ์ฃผํƒ์„ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ์œผ๋กœ ํ™œ์šฉํ•˜๊ฑฐ๋‚˜ ๊ฐ€์กฑ ๋‹จ์œ„๋กœ ์ƒํ™œํ•  ์ˆ˜ ์—†๋Š” ์ข์€ ์ฃผํƒ์ธ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์•˜๋‹ค. ์ด ๋•Œ๋ฌธ์— ์•„๋™์ด ์žˆ๋Š” ๊ฐ€๊ตฌ๋ฅผ ์œ„ํ•œ ์•ˆ์ „ํ•œ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ์ด ํ•„์š”ํ•œ ์ƒํ™ฉ์ด์—ˆ๋‹ค. 
์ด๋ฒˆ ์—…๋ฌดํ˜‘์•ฝ์— ๋”ฐ๋ผ ํ™ˆ์•ค์‡ผํ•‘์ด ๊ฐ€์ „, ๊ฐ€๊ตฌ ๋“ฑ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ์„ ์œ„ํ•ด ํ•„์š”ํ•œ ๋ฌผํ’ˆ์„ ํ›„์›ํ•˜๋ฉฐ ์ดˆ๋ก์šฐ์‚ฐ ์–ด๋ฆฐ์ด์žฌ๋‹จ์ด ํ›„์›๊ธˆ ์ง‘ํ–‰์„ ๋‹ด๋‹นํ•œ๋‹ค. SH๊ณต์‚ฌ๋Š” ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ ์šด์˜์„ ์œ„ํ•œ ๋งค์ž…์ž„๋Œ€์ฃผํƒ์„ ์œ ์ƒ ์ œ๊ณตํ•˜๊ณ  ๊ตฌ๋กœ๊ตฌ์ฒญ์€ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ ์šด์˜๊ณผ ํ•จ๊ป˜ ์ฃผ๊ฑฐ์œ„๊ธฐ๊ฐ€๊ตฌ์˜ ์ฃผ๊ฑฐ ์ƒํ–ฅ์„ ์œ„ํ•ด ๋…ธ๋ ฅํ•˜๊ฒŒ ๋œ๋‹ค. ์„œ์šธ์— ์‚ฌ๋Š” ์ง€์€(๊ฐ€๋ช…)์ด๋„ค ๊ฐ€์กฑ์€ ์ฝ”๋กœ๋‚˜19 ์œ„๊ธฐ๋กœ ๋ถ€๋ชจ๋‹˜์˜ ์†Œ๋“์ด ์ค„์–ด๋“ค์–ด์„œ 5๊ฐœ์›”์˜ ์›”์„ธ๊ฐ€ ๋ฐ€๋ ค ํ‡ด๊ฑฐ ์œ„๊ธฐ์— ๋†“์—ฌ์žˆ์—ˆ์ง€๋งŒ ์ด๋ฒˆ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ ์‚ฌ์—…์„ ํ†ตํ•ด ์ผ์‹œ์ ์œผ๋กœ ๊ฑฐ์ฃผ์ง€๋ฅผ ๋งˆ๋ จํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋๋‹ค. ๊ฑฐ์ฃผํ•˜๋Š” ๋™์•ˆ ๋‹ค์–‘ํ•œ ์ง€์›์ฒด๊ณ„๋ฅผ ์—ฐ๊ณ„ํ•ด ์•ˆ์ •์ ์ธ ์ฃผ๊ฑฐ ๊ณ„ํš์„ ์ˆ˜๋ฆฝํ•œ๋‹ค. ํ•œํŽธ ใ€Œ์„œ์šธํŠน๋ณ„์‹œ ์•„๋™ ์ฃผ๊ฑฐ๋นˆ๊ณค ํ•ด์†Œ๋ฅผ ์œ„ํ•œ ์ง€์› ์กฐ๋ก€์•ˆใ€ ์กฐ๋ก€ ์ œ์ • ๋ฐ ์ด๋ฒˆ ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ ์‚ฌ์—…์— ์ฐธ์—ฌํ•˜๋Š” ์ดˆ๋ก์šฐ์‚ฐ์–ด๋ฆฐ์ด์žฌ๋‹จ ์ด์ œํ›ˆ ํšŒ์žฅ์€ โ€œ์ฝ”๋กœ๋‚˜ ์ƒํ™ฉ์ด ์žฅ๊ธฐํ™” ๋˜๋ฉด์„œ ํ‡ด๊ฑฐ ์œ„๊ธฐ์— ๋†“์ธ ๊ฐ€๊ตฌ๊ฐ€ ๋Š˜๊ณ  ์žˆ์œผ๋ฉฐ, ์•„๋™์„ ๋™๋ฐ˜ํ•œ ๊ฐ€๊ตฌ๋Š” ํ‡ด๊ฑฐ ์ƒํ™ฉ์—์„œ ๊ฒช๋Š” ์–ด๋ ค์›€์ด ์ผ๋ฐ˜ ๊ฐ€๊ตฌ์— ๋น„ํ•ด ํฌ๋‹ค. ์ด๋ฒˆ ๊ตฌ๋กœ๊ตฌ์™€์˜ ์‚ฌ์—…์„ ์‹œ์ž‘์œผ๋กœ ๋” ๋งŽ์€ ์ž์น˜๊ตฌ์—์„œ ์‚ฌ์—…์ด ์ง„ํ–‰๋˜๊ธธ ๊ธฐ๋Œ€ํ•œ๋‹ค. ๋˜, ๊ธด๊ธ‰์ž„์‹œ์ฃผํƒ์— ์ž…์ฃผํ•œ ์œ„๊ธฐ ๊ฐ€๊ตฌ๊ฐ€ ๊ณต๊ณต์ž„๋Œ€์ฃผํƒ ๋ฐ ์ผ๋ฐ˜ ์ฃผ๊ฑฐ๋กœ์˜ ์ฃผ๊ฑฐ ์ƒํ–ฅ๊นŒ์ง€ ์ด๋ฅผ ์ˆ˜ ์žˆ๋„๋ก ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ๋„ ์ค‘์š”ํ•  ๊ฒƒ์ด๋‹ค. โ€๋ผ๊ณ  ๊ธฐ๋Œ€๊ฐ์„ ํ‘œ์‹œํ–ˆ๋‹ค. - ์ง€๋‚œ 12์ผ ์‹œ๊ณต์‚ฌ ์„ ์ • ์ž…์ฐฐ์„ ์‹œํ–‰ํ•œ ์„œ์šธ ํƒœ๋ฆ‰ํ˜„๋Œ€์•„ํŒŒํŠธ ์žฌ๊ฑด์ถ•์กฐํ•ฉ์€ ๊ฒฐ๊ตญ ๋˜๋‹ค์‹œ ๊ณต์‚ฌ๋ฅผ ๋งก์„ ์—…์ฒด๋ฅผ ๋ฝ‘๋Š” ๋ฐ ์‹คํŒจํ–ˆ๋‹ค. ์ด๋ฏธ ์„ธ ์ฐจ๋ก€๋‚˜ ์œ ์ฐฐ๋ผ ์ด๋ฒˆ์—๋Š” ์ˆ˜์˜๊ณ„์•ฝ ๋ฐฉ์‹์œผ๋กœ ๊ฑด์„ค์‚ฌ๋ฅผ ์ง€๋ช…ํ•  ์ˆ˜ ์žˆ์—ˆ์ง€๋งŒ ๊ทธ๋งˆ์ € ๊ด€์‹ฌ์„ ๋ณด์ด๋˜ A๊ฑด์„ค์‚ฌ๊ฐ€ ์ œ์•ˆ์„œ ์ œ์ถœ์„ ํฌ๊ธฐํ–ˆ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์กฐํ•ฉ ๊ด€๊ณ„์ž๋Š” โ€œ์ด์‚ฌํšŒ๋ฅผ ์—ด์–ด ์•ž์œผ๋กœ๋Š” ํ•œ ์—…์ฒด๊ฐ€ ๋‹จ๋…์œผ๋กœ ์ž…์ฐฐ์— ์ฐธ์—ฌํ•˜๋”๋ผ๋„ ์‹œ๊ณต์‚ฌ ์„ ์ •์„ ์œ„ํ•œ ์ฃผ๋ฏผ์ดํšŒ์— ์ƒ์ •ํ•ด ํ†ต๊ณผ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋„๋ก ํ•  ๊ณ„ํšโ€์ด๋ผ๊ณ  ๋งํ–ˆ๋‹ค.์žฌ๊ฐœ๋ฐœยท์žฌ๊ฑด์ถ•์กฐํ•ฉ๋“ค์ด ํ—Œ ์ง‘์„ ํ—๊ณ  ์ƒˆ ์ง‘์œผ๋กœ ์ง€์–ด์ค„ ๊ฑด์„ค์‚ฌ(์‹œ๊ณต์‚ฌ)๋ฅผ ๋ชจ์‹œ๋Š” ๋ฐ ์• ๋ฅผ ๋จน๊ณ  ์žˆ๋‹ค. ๋ถ€๋™์‚ฐ ์‹œ์žฅ ์นจ์ฒด๊ฐ€ ์žฅ๊ธฐํ™”๋˜๋ฉด์„œ ์ž๊ธˆ์‚ฌ์ •์ด ๋น ๋“ฏํ•ด์ง„ ๊ฑด์„ค์‚ฌ๋“ค์ด ๊ณ„์•ฝ์กฐ๊ฑด์ด ์œ ๋ฆฌํ•˜๊ฑฐ๋‚˜ ๋ถ„์–‘์„ฑ์ด ๋›ฐ์–ด๋‚œ ๋‹จ์ง€๋งŒ ๊ณจ๋ผ ์ˆ˜์ฃผํ•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์ตœ๊ทผ ์šฉ์ธ2๊ตฌ์—ญ ์žฌ๊ฑด์ถ•์‚ฌ์—…๋„ ์ฐธ์—ฌ ์—…์ฒด๊ฐ€ ์—†์–ด ์‹œ๊ณต์‚ฌ ์„ ์ • ์ž…์ฐฐ์ด ์œ ์ฐฐ๋๋‹ค. ์˜ค๋Š” 22์ผ ์ž…์ฐฐ์ด ์‹ค์‹œ๋  ์˜ˆ์ •์ด๋˜ ๋ถ€์ฒœ ์›์ข…3D๊ตฌ์—ญ ๋„์‹œํ™˜๊ฒฝ์ •๋น„์‚ฌ์—…๋„ ์•ž์„œ ๊ฐœ์ตœํ•œ ํ˜„์žฅ์„ค๋ช…ํšŒ์—์„œ ๊ฑด์„ค์‚ฌ๊ฐ€ ๋‹จ ํ•œ ๊ณณ๋„ ๋‚˜ํƒ€๋‚˜์ง€ ์•Š์ž ์ž…์ฐฐ์ด ์ž๋™ ์œ ์ฐฐ๋๋‹ค. ์„œ์šธ ์ž์–‘1๊ตฌ์—ญ ์žฌ๊ฑด์ถ•์กฐํ•ฉ๋„ ์ž…์ฐฐ์ด ํ‘œ๋ฅ˜ ์ค‘์ด๋‹ค. ์„œ์šธ ๊ตฌ์‚ฐ1๊ตฌ์—ญ๊ณผ ํ™์ œ3๊ตฌ์—ญ์€ ์ˆ˜๋…„ ์ „ ์‹œ๊ณต์‚ฌ๋ฅผ ๊ต์ฒดํ•˜๋ ค๊ณ  ๊ณ„์•ฝ์„ ํ•ด์ง€ํ–ˆ๋‹ค๊ฐ€ ์ง€๊ธˆ๊นŒ์ง€ ๋‹ค๋ฅธ ๊ฑด์„ค์‚ฌ๋ฅผ ์ฐพ์ง€ ๋ชปํ•˜๊ณ  ์žˆ๋‹ค.โ€˜์‹œ๊ณต์‚ฌ ๋ชจ์‹œ๊ธฐโ€™๊ฐ€ ์–ด๋ ค์›€์— ์ฒ˜ํ•˜์ž ์ฃผ๋ฏผ๋“ค์ด ํ•œ ๋ฒˆ ๊ฑฐ์ ˆํ–ˆ๋˜ ์‹œ๊ณต์‚ฌ์— ๋‹ค์‹œ โ€˜๋Ÿฌ๋ธŒ์ฝœโ€™์„ ๋ณด๋‚ด๋Š” ๊ฒฝ์šฐ๋„ ๋‚˜ํƒ€๋‚˜๊ณ  ์žˆ๋‹ค. ์„œ์šธ ์ƒ๋„๋™ ๋Œ€๋ฆผ์•„ํŒŒํŠธ๋Š” ๋Œ€ํ˜•์ฃผํƒ ๋น„์œจ ๋“ฑ์„ ๋‘˜๋Ÿฌ์‹ผ ์„ค๊ณ„๋ณ€๊ฒฝ ๊ฑด์œผ๋กœ B๊ฑด์„ค์‚ฌ์™€ ๋ณธ๊ณ„์•ฝ ์ง์ „์— ๊ณ„์•ฝ์„ ํ•ด์ง€ํ•œ ๋ฐ” ์žˆ๋‹ค. ์ดํ›„ ์ƒˆ๋กœ์šด ์‹œ๊ณต์‚ฌ๋ฅผ ์ฐพ์•„๋‚˜์„ฐ์ง€๋งŒ ์ž…์ฐฐ์ด ๋ฒˆ๋ฒˆ์ด ์œ ์ฐฐ๋˜๋ฉด์„œ ๊ฒฐ๊ตญ B์—…์ฒด์— ๋‹ค์‹œ ์†์„ ๋‚ด๋ฐ€์—ˆ๋‹ค. ๊ธ€๋กœ๋ฒŒ ๊ธˆ์œต์œ„๊ธฐ ์ดํ›„ ์กฐํ•ฉ์ด ์‹œ๊ณต์‚ฌ๋ฅผ ๋ฐ”๊พธ๋Š” ์‚ฌ๋ก€๋Š” ์ ์ง€ ์•Š์•˜๋‹ค. 
์‹œ๊ณต์‚ฌ๋Š” ๊ธˆ์œตํšŒ์‚ฌ์—์„œ ๋ˆ์„ ๋นŒ๋ ค ์กฐํ•ฉ์— ์‚ฌ์—…๋น„ ๋“ฑ์œผ๋กœ ๋Œ€์—ฌํ•ด ์ฃผ๋Š”๋ฐ ์žฌ๋ฌด์ƒํƒœ๊ฐ€ ๋‚˜๋น ์ง„ ๊ฑด์„ค์‚ฌ๋‚˜ ์‚ฌ์—…์„ฑ์ด ๋–จ์–ด์ง€๋Š” ์‚ฌ์—…์žฅ์—์„  ์ œ๋Œ€๋กœ ์ž๊ธˆ์ด ์ง€์›๋˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ๊ฑด์„ค์‚ฌ๋“ค๋„ ๊ณผ๊ฑฐ์ฒ˜๋Ÿผ ํฐ ์ž๊ธˆ์„ ์Ÿ์•„๋ถ€์œผ๋ฉฐ ์‚ฌ์—… ์ˆ˜์ฃผ๋ฅผ ์œ„ํ•œ ๊ธฐ๋“๊ถŒ ๊ตฌ์ถ•์—๋งŒ ๋งค๋‹ฌ๋ฆฌ์ง€ ์•Š๋Š”๋‹ค. ์†Œ์œ„ โ€˜๋  ์‚ฌ์—…โ€™์„ ๋ƒ‰์ •ํ•˜๊ฒŒ ๊ณ ๋ฅธ๋‹ค๋Š” ์˜๋ฏธ๋‹ค. ํ•œ ๋Œ€ํ˜• ๊ฑด์„ค์‚ฌ ์˜์—…๋‹ด๋‹น ์ƒ๋ฌด๋Š” โ€œ์—ญ์„ธ๊ถŒ๋„ ์•„๋‹ˆ๊ณ  ์‹œ์„ธ๋Š” ๋–จ์–ด์ง€๋Š”๋ฐ ์กฐํ•ฉ์›๋“ค์ด ๋น„์‹ผ ์ผ๋ฐ˜ ๋ถ„์–‘๊ฐ€๋ฅผ ๊ณ ์ง‘ํ•˜๋ฉด ๋‹ต์ด ์—†๋‹คโ€๋ฉฐ โ€œ์žฌ๊ฐœ๋ฐœยท์žฌ๊ฑด์ถ•์€ ์ฃผ๋ฏผ ๊ฐ„ ๊ฐˆ๋“ฑ์œผ๋กœ ์‚ฌ์—…์ด ๋Šฆ์–ด์ง€๋Š” ๋“ฑ ๋ฆฌ์Šคํฌ๊ฐ€ ํฌ๊ธฐ ๋•Œ๋ฌธ์— ๊ฑด์„ค์‚ฌ๋“ค์€ ์‹ ์ค‘ํžˆ ์‚ฌ์—…์žฅ์„ ๊ณ ๋ฅด๊ณ  ์žˆ๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. - source_sentence: ์˜ค์Šค๋งŒ ํŠ€๋ฅดํฌ์˜ ์˜ํ† ๊ฐ€ ์ถ•์†Œ๋œ ์›์ธ์€? sentences: - '๋ฏธ๊ตญ๊ณผ ์˜๊ตญ, ํ”„๋ž‘์Šค ๋“ฑ์ง€์—์„œ๋Š” ๋ฏผ์ฃผ์ฃผ์˜๊ฐ€ ๋ฐœ์ „ํ–ˆ๋‹ค. ์ผ๋ณธ์€ ์˜ค์„ธ์•„๋‹ˆ์•„์˜ ๊ตฐ๋„์— ๋Œ€ํ•œ ์ง€๋ฐฐ๊ถŒ์„ ํ™•๊ณ ํžˆ ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋˜์—ˆ๋‹ค. ํ•œํŽธ, ๋…์ผ์€ ๋ฒ ๋ฅด์‚ฌ์œ  ์กฐ์•ฝ์œผ๋กœ ๋ง๋ฏธ์•”์•„ ๋ฐ˜์„ฑ๋ณด๋‹ค ์ง€๋…ํ•œ ๊ฐ€๋‚œ๊ณผ ๋ฐฐ์ƒ๊ธˆ์— ๋Œ€ํ•œ ๊ฒƒ์— ์‹œ๋‹ฌ๋ ธ์œผ๋ฉฐ ์˜ค์Šค๋งŒ ํŠ€๋ฅดํฌ๋„ ์„ธ๋ธŒ๋ฅด ์กฐ์•ฝ์„ ๋งบ์Œ์œผ๋กœ์จ ์˜ํ† ๊ฐ€ ํฌ๊ฒŒ ์ค„์–ด๋“ค์—ˆ๋‹ค(1922๋…„ ํ•ด์ฒด, 1923๋…„ ํ„ฐํ‚ค ๊ณตํ™”๊ตญ ์ˆ˜๋ฆฝ). ์˜ค์ŠคํŠธ๋ฆฌ์•„์™€ ํ—๊ฐ€๋ฆฌ๋„ ๊ฐ๊ฐ ์ƒ์ œ๋ฅด๋งน ์กฐ์•ฝ, ํŠธ๋ฆฌ์•„๋† ์กฐ์•ฝ์„ ๋งบ์Œ์œผ๋กœ์จ ์˜ํ† ๊ฐ€ ํฌ๊ฒŒ ์ค„์–ด๋“ค์—ˆ๋‹ค. ๋ถˆ๊ฐ€๋ฆฌ์•„๋Š” ๋‡Œ์ด ์กฐ์•ฝ์œผ๋กœ ๋‚จ๋„๋ธŒ๋ฃจ์ž๋ฅผ ๋ฃจ๋งˆ๋‹ˆ์•„์— ๋–ผ์–ด์ฃผ์—ˆ๋‹ค. ์ดํƒˆ๋ฆฌ์•„๋Š” ์Šน์ „๊ตญ์ด์—ˆ์œผ๋‚˜ ์—ฐํ•ฉ๊ตญ์—๊ฒŒ ์˜ํ† ๋ฅผ ๋ณด์žฅ๋ฐ›๊ธฐ๋Š”์ปค๋…• ๋ƒ‰๋Œ€๋ฅผ ๋ฐ›์•˜๋‹ค. ๊ฒฐ๊ตญ 1922๋…„์— ๋ฒ ๋‹ˆํ†  ๋ฌด์†”๋ฆฌ๋‹ˆ์— ์˜ํ•œ ํŒŒ์‹œ์ŠคํŠธ ์ •๊ถŒ์ด ์ˆ˜๋ฆฝ๋œ๋‹ค. ์ดํƒˆ๋ฆฌ์•„์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ค‘๊ตญ์€ ์—ฐํ•ฉ๊ตญ์ž„์—๋„ ๋ถˆ๊ตฌํ•˜๊ณ  ์‚ฐ๋‘ฅ ๋ฐ˜๋„์— ๋Œ€ํ•œ ์ด๊ถŒ์„ ๋Œ๋ ค๋ฐ›์ง€ ๋ชปํ•˜์˜€๋‹ค. ์šฐ๋“œ๋กœ ์œŒ์Šจ์˜ ๋ฏผ์กฑ์ž๊ฒฐ์ฃผ์˜ ์›์น™์— ๋”ฐ๋ผ ์ค‘์•™์œ ๋Ÿฝ์˜ ๋งŽ์€ ๊ตญ๊ฐ€๋Š” ๋…๋ฆฝํ•˜์˜€์œผ๋ฉฐ, ๋…๋ฆฝ์„ ์กฐ๊ฑด์œผ๋กœ ์˜๊ตญ์„ ๋„์™”๋˜ ์ธ๋„๋Š” ๊ทธ ์•ฝ์†์ด ๋ฌด์‚ฐ๋˜์ž ์ง€์†์ ์ธ ํˆฌ์Ÿ ์šด๋™์„ ์‹œ์ž‘ํ–ˆ๋‹ค. ํ•œํŽธ, ์šฐ๋“œ๋กœ ์œŒ์Šจ ๋ฏธ๊ตญ ๋Œ€ํ†ต๋ น์€ ๋ฏผ์กฑ ์ž๊ฒฐ์ฃผ์˜๋ฅผ ์ œ์ฐฝํ•˜์˜€์œผ๋ฉฐ, ์ „์Ÿ์˜ ๋ฐฉ์ง€์™€ ์„ธ๊ณ„์˜ ํ‰ํ™”๋ฅผ ์œ„ํ•ด ๊ตญ์ œ ์—ฐ๋งน์„ ์„ค๋ฆฝํ•  ๊ฒƒ์„ ์ œ์•ˆํ•˜์˜€๋‹ค. ์ด๋กœ์จ ๊ตญ์ œ ์—ฐ๋งน์ด ์„ค๋ฆฝ๋˜์—ˆ์œผ๋‚˜, ์ •์ž‘ ๋ฏธ๊ตญ์€ ์˜ํšŒ์˜ ๋ฐ˜๋Œ€๋กœ ๊ฐ€์ž…์— ์‹คํŒจํ•˜์˜€๋‹ค. ๊ฒฐ๊ตญ ๋‹ค์‹œ ๊ณ ๋ฆฝ์˜ ๊ธธ์„ ๊ฑธ์—ˆ๋‹ค.' - ๋‚ด๊ณผ ์™ธ๊ณผ ์†Œ์•„์ฒญ์†Œ๋…„๊ณผ ์‚ฐ๋ถ€์ธ๊ณผ ๋งˆ์ทจํ†ต์ฆ์˜ํ•™๊ณผ ๋“ฑ 5๊ฐœ โ€˜ํ•„์ˆ˜ ์ง„๋ฃŒ๊ณผ๋ชฉโ€™ ์ „๋ฌธ์˜(้†ซ)๋ฅผ ๋ชจ๋‘ ๊ฐ–์ถ”์ง€ ๋ชปํ•œ ์‹œยท๊ตฐยท๊ตฌ๊ฐ€ ์ „๊ตญ 251๊ณณ ๊ฐ€์šด๋ฐ 27๊ณณ์— ๋‹ฌํ–ˆ๋‹ค. ๊ตํ†ต์‚ฌ๊ณ ๋‚˜ ์‹ฌ๊ทผ๊ฒฝ์ƒ‰ ๋“ฑ์œผ๋กœ ์‘๊ธ‰์‹ค์„ ์ฐพ์€ ํ™˜์ž๋ฅผ ์ „๋‹ดํ•˜๋Š” ์‘๊ธ‰์˜ํ•™๊ณผ ์ „๋ฌธ์˜๊ฐ€ ์—†๋Š” ์ง€๋ฐฉ์ž์น˜๋‹จ์ฒด๋„ 51๊ณณ์ด์—ˆ๋‹ค. โ–ถ๊ด€๋ จ๊ธฐ์‚ฌ A3๋ฉด๊ฑด๊ฐ•๋ณดํ—˜์‹ฌ์‚ฌํ‰๊ฐ€์›์˜ ์‹œยท๊ตฐยท๊ตฌ๋ณ„ โ€˜์ „๋ฌธ๊ณผ๋ชฉ๋ณ„ ์ „๋ฌธ์˜ ์ธ์› ํ˜„ํ™ฉโ€™๊ณผ โ€˜ํ‘œ์‹œ๊ณผ๋ชฉ๋ณ„ ์˜์› ํ˜„ํ™ฉโ€™์— ๋”ฐ๋ฅด๋ฉด ์ „๋ฌธ์˜ ์ˆ˜๋Š” ์ตœ๊ทผ 5๋…„(2009~2013๋…„) ์‚ฌ์ด์— 1๋งŒ๋ช… ๋„˜๊ฒŒ ์ฆ๊ฐ€ํ–ˆ๋Š”๋ฐ๋„ ํ•„์ˆ˜ ์ง„๋ฃŒ๊ณผ๋ชฉ ์ „๋ฌธ์˜๋ฅผ ๋‹ค ๊ฐ–์ถ”์ง€ ๋ชปํ•œ ์ง€๋ฐฉ์ž์น˜๋‹จ์ฒด๋Š” ์˜คํžˆ๋ ค 4๊ณณ ๋Š˜์—ˆ๋‹ค. ํ•„์ˆ˜ ์ง„๋ฃŒ๊ณผ๋ชฉ์ด๋ž€ โ€˜์‘๊ธ‰์˜๋ฃŒ์— ๊ด€ํ•œ ๋ฒ•๋ฅ  ์‹œํ–‰๊ทœ์น™โ€™์—์„œ ์‘๊ธ‰์˜๋ฃŒ๊ธฐ๊ด€์— ๋‹น์ง ์ „๋ฌธ์˜๋ฅผ ๋ฐ˜๋“œ์‹œ ๋‘๋„๋ก ํ•œ 5๊ฐœ ์ง„๋ฃŒ๊ณผ๋ชฉ์„ ๋งํ•œ๋‹ค.๊ฒฝ๋ถ ์˜์–‘๊ตฐ์€ ๋‚ด๊ณผ๋ฅผ ์ œ์™ธํ•œ ๋ชจ๋“  ํ•„์ˆ˜ ์ง„๋ฃŒ๊ณผ๋ชฉ์—์„œ ์ „๋ฌธ์˜๊ฐ€ ํ•œ ๋ช…๋„ ์—†์—ˆ๋‹ค. 
๊ฐ•์› ์–‘์–‘๊ตฐ์€ ์‚ฐ๋ถ€์ธ๊ณผ ์†Œ์•„์ฒญ์†Œ๋…„๊ณผ ๋งˆ์ทจํ†ต์ฆ์˜ํ•™๊ณผ ๋“ฑ 3๊ฐœ ํ•„์ˆ˜๊ณผ๋ชฉ ์ „๋ฌธ์˜๊ฐ€ ์—†๋‹ค. ์™ธ๊ณผ ์ „๋ฌธ์˜๊ฐ€ ์—†๋Š” ์ง€์ž์ฒด๋Š” ๊ฒฝ๋ถ ๋ด‰ํ™”๊ตฐยท์šธ๋ฆ‰๊ตฐ ๋“ฑ 3๊ณณ์ด์—ˆ๊ณ  ๋งˆ์ทจํ†ต์ฆ์˜ํ•™๊ณผ ์ „๋ฌธ์˜๊ฐ€ ์—†๋Š” ๊ณณ๋„ ๊ฐ•์› ์–‘๊ตฌ๊ตฐ, ์ถฉ๋ถ ๋‹จ์–‘๊ตฐ ๋“ฑ 9๊ณณ์— ๋‹ฌํ–ˆ๋‹ค.์†Œ์•„์ฒญ์†Œ๋…„๊ณผ ์ „๋ฌธ์˜๋ฅผ ์ฐพ์„ ์ˆ˜ ์—†๋Š” ์ง€์ž์ฒด๋Š” ์ถฉ๋ถ ๋ณด์€๊ตฐยท๊ดด์‚ฐ๊ตฐ๊ณผ ์ „๋ถ ์ง„์•ˆ๊ตฐ ๋“ฑ 14๊ณณ, ์‚ฐ๋ถ€์ธ๊ณผ ์ „๋ฌธ์˜๊ฐ€ ์—†๋Š” ๊ณณ์€ ๊ฒฝ๋ถ ๊ณ ๋ น๊ตฐยท์˜์„ฑ๊ตฐ๊ณผ ์ „๋‚จ ๊ตฌ๋ก€๊ตฐ ๋“ฑ 12๊ณณ์ด์—ˆ๋‹ค. - โ€œโ€˜์ปดํ“จํ„ฐ๊ฐ€ ๊ทธ๋ฆฌ ์ข‹์œผ๋ฉด ํ•™๊ต๋ฅผ ๊ทธ๋งŒ๋‘๋ผโ€™๋Š” ์—„๋งˆ์˜ ์กฐ์–ธ์ด ํ…€๋ธ”๋Ÿฌ ์„ค๋ฆฝ์˜ ๋ฐœ๋‹จ์ด ๋๋‹ค.โ€์•ผํ›„๊ฐ€ ์ตœ๊ทผ ์ธ์ˆ˜ํ•œ ๋งˆ์ดํฌ๋กœ ๋ธ”๋กœ๊น… ์‚ฌ์ดํŠธ โ€˜ํ…€๋ธ”๋Ÿฌโ€™ ์ฐฝ์—…์ž ๋ฐ์ด๋น„๋“œ ์นดํ”„(์‚ฌ์ง„)์˜ ์„ฑ๊ณต ์š”์ธ์„ ๋‘๊ณ  ์ฃผ์š” ์™ธ์‹ ๋“ค์ด ์ „ํ•œ ๋ง์ด๋‹ค. ์—ฌ๋А ๋ถ€๋ชจ์™€ ๋‹ฌ๋ฆฌ ์ปดํ“จํ„ฐ์— ๋น ์ ธ ์‚ด๋˜ 14์„ธ ์†Œ๋…„์ด ์ž์‹ ์ด ์›ํ•˜๋Š” ์ผ์— ๋ชฐ๋‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ณผ๊ฐํ•˜๊ฒŒ ํ•™๊ต ์ค‘ํ‡ด๋ฅผ ๊ถŒ์œ ํ•œ ์—„๋งˆ์˜ ๊ฒฐ์ •์ด 20๋Œ€ ์–ต๋งŒ์žฅ์ž ํƒ„์ƒ์˜ ๋ฐ‘๊ฑฐ๋ฆ„์ด ๋๋‹ค๋Š” ๊ฒƒ์ด๋‹ค.22์ผ ๋‰ด์š•ํƒ€์ž„์Šค ๋“ฑ์— ๋”ฐ๋ฅด๋ฉด ๊ทธ๋Š” 2000๋…„ ๋‰ด์š•์˜ ์ผ๋ฅ˜ ๊ณต๋ฆฝํ•™๊ต์ธ ๋ธŒ๋กฑํฌ์Šค๊ณผํ•™๊ณ ์— ๋‹ค๋…”๋‹ค. ๋‹น์‹œ 14์„ธ์ธ ์นดํ”„๋Š” ๋จธ๋ฆฌ๊ฐ€ ์ด๋ช…ํ–ˆ์ง€๋งŒ, ๋‚ด์„ฑ์ ์ธ ๋ฐ๋‹ค ํ•˜๋ฃจ ์ข…์ผ ์ปดํ“จํ„ฐ์— ๋น ์ ธ ์‚ด์•˜๋‹ค.๊ทธ์˜ ์—„๋งˆ ๋ฐ”๋ฒ„๋ผ ์—์ด์ปค๋จผ์€ โ€œ์นดํ”„๋Š” 10๋Œ€ ์†Œ๋…„์ด ๊ทธ๋ ‡๋“ฏ ์—ฌ์ž์นœ๊ตฌ์™€ ๋น„๋””์˜ค ๊ฒŒ์ž„์„ ์ข‹์•„ํ–ˆ์ง€๋งŒ, ์ปดํ“จํ„ฐ๋งŒํผ ๊ทธ๋ฅผ ๋งคํ˜น์‹œํ‚ค์ง€๋Š” ๋ชปํ–ˆ๋‹คโ€๋ฉฐ โ€œ๊ทธ์˜ ์—ด์ •์„ ์‚ด๋ฆด ๊ณต๊ฐ„์ด ํ•„์š”ํ–ˆ๋‹คโ€๊ณ  ํšŒ๊ณ ํ–ˆ๋‹ค. ๋‹น์‹œ ์‚ฌ๋ฆฝํ•™๊ต์˜ ๊ณผํ•™๊ต์‚ฌ์˜€๋˜ ์—์ด์ปค๋จผ์€ ์•„๋“ค์ด ํ•™๊ต๋ฅผ ์ค‘ํ‡ดํ•˜๋Š” ๋Œ€์‹  ํ™ˆ์Šค์ฟจ์„ ํ†ตํ•ด ํ•™์—…์„ ๊ณ„์†ํ•˜๋„๋ก ํ–ˆ๋‹ค. ์ด์ •์„  ๊ธฐ์ž [email protected] - source_sentence: ๋…น์ง€ ํ”„๋ฆฌ๋ฏธ์—„ ๋‹จ์ง€'๋ผ๊ณ  ๋ถˆ๋ฆฌ๋Š” ์•„ํŒŒํŠธ์—์„œ ๊ฑธ์–ด์„œ ๊ฐˆ ์ˆ˜ ์žˆ๋Š” ์—ญ์˜ ์ด๋ฆ„์€? sentences: - ์กฐํ™˜์ต ํ•œ๊ตญ์ „๋ ฅ ์‚ฌ์žฅ(์‚ฌ์ง„)์ด ์„œ์šธ ์‚ผ์„ฑ๋™ ํ•œ์ „ ๋ณธ์‚ฌ ๊ฑด๋ฌผ์„ ๋งค๊ฐํ•˜์ง€ ์•Š๊ฒ ๋‹ค๋Š” ๋œป์„ ๋‚ด๋น„์ณค๋‹ค. โ€˜์„ ๋งค๊ฐ ํ›„์ด์ „โ€™์ด๋ผ๋Š” ์ •๋ถ€์˜ ๊ณต๊ณต๊ธฐ๊ด€ ํ˜์‹ ๋„์‹œ ์ด์ „ ๋ฐฉ์นจ๊ณผ ๊ฑฐ๋ฆฌ๊ฐ€ ์žˆ๋Š” ๋ฐ๋‹ค ์‚ผ์„ฑ์ƒ๋ช… KB๊ธˆ์œต ๋“ฑ์ด ์ด ๋ถ€์ง€๋ฅผ ๋งค์ž…ํ•˜๊ธฐ ์œ„ํ•ด ๋ฌผ๋ฐ‘ ๊ฒฝ์Ÿ์„ ๋ฒŒ์ด๋Š” ์ƒํ™ฉ์ด์–ด์„œ ํŒŒ์žฅ์ด ์˜ˆ์ƒ๋œ๋‹ค. ์กฐ ์‚ฌ์žฅ์€ 29์ผ ์ง€์‹๊ฒฝ์ œ๋ถ€ ์ถœ์ž…๊ธฐ์ž๋“ค๊ณผ ๋งŒ๋‚˜ ํ•œ์ „ ๋ณธ์‚ฌ ๊ฑด๋ฌผ์— ๋Œ€ํ•ด โ€œ(์ง€๊ธˆ์œผ๋กœ์„œ๋Š”) ๋งค๊ฐํ•  ์ƒ๊ฐ์ด ์—†๋‹คโ€๋ฉฐ โ€œ์ผ๋ฐ˜๋งค๊ฐ๋ณด๋‹ค๋Š” ํ–ฅํ›„ ๊ฐœ๋ฐœ์„ ํ†ตํ•ด ์ˆ˜์ต์„ ์ฐฝ์ถœํ•˜๋Š” ๋ฐฉ์•ˆ์„ ์ •๋ถ€์™€ ํ˜‘์˜ํ•˜๊ฒ ๋‹คโ€๊ณ  ๋งํ–ˆ๋‹ค. ํ•œ์ „์€ ๋‚ด๋…„ 8์›” ์ „๋‚จ ๋‚˜์ฃผ ํ˜์‹ ๋„์‹œ๋กœ ์ด์ „์ด ์˜ˆ์ •๋ผ ์žˆ๋‹ค. ์ •๋ถ€ ๋ฐฉ์นจ์— ๋”ฐ๋ฅด๋ฉด ํ˜์‹ ๋„์‹œ๋กœ ๋ณธ์‚ฌ๋ฅผ ์˜ฎ๊ธฐ๋Š” ๊ณต๊ณต๊ธฐ๊ด€์€ ์ด์ „ ์ „์— ๋ณธ์‚ฌ ๊ฑด๋ฌผ์„ ๋งค๊ฐํ•ด์•ผ ํ•œ๋‹ค. ์„œ์šธ ์‚ผ์„ฑ๋™ ํ•œ์ „ ๋ณธ์‚ฌ ๋ถ€์ง€๋Š” 7934ใŽก ๊ทœ๋ชจ๋กœ ์‹œ๊ฐ€ 3์กฐ์›์œผ๋กœ ์ถ”์ •๋œ๋‹ค. ์„œ์šธ ๊ฐ•๋‚จ๊ถŒ์— ์œ„์น˜ํ•œ ์‚ฌ์‹ค์ƒ ๋งˆ์ง€๋ง‰ ๊ธˆ์‹ธ๋ผ๊ธฐ ๋•…์ด๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ์กฐ ์‚ฌ์žฅ์€ ๋˜ ์—ฐ๋‚ด ์ „๊ธฐ์š”๊ธˆ ์ถ”๊ฐ€ ์ธ์ƒ ๊ฐ€๋Šฅ์„ฑ์— ๋Œ€ํ•ด โ€œ๋‹จ์ •์ ์œผ๋กœ ์ด์•ผ๊ธฐํ•  ์ˆ˜ ์—†์ง€๋งŒ ํ˜„์žฌ๋กœ์„œ๋Š” ์ „๊ธฐ์š”๊ธˆ์„ ์ถ”๊ฐ€๋กœ ์ธ์ƒํ•  ์ƒ๊ฐ์ด ์—†๋‹คโ€๊ณ  ๋ฐํ˜”๋‹ค. ์ „๊ธฐ์š”๊ธˆ ๋ˆ„์ง„์ œ ์ถ•์†Œ์™€ ๊ด€๋ จํ•ด์„œ๋Š” โ€œ๋ˆ„์ง„์ œ๋ฅผ ํ†ตํ•ด ๋งˆ๋ จํ•œ ์žฌ์›์œผ๋กœ ๋นˆ๋ฏผ๋“ค์—๊ฒŒ ์‹ธ๊ฒŒ ์ „๊ธฐ๋ฅผ ๊ณต๊ธ‰ํ•˜๋Š” ๊ฒƒ์€ ์ข‹๋‹คโ€๋ฉด์„œ๋„ โ€œ(์ผ๋ถ€ ๊ณ„์ธต์„ ๋Œ€์ƒ์œผ๋กœ) ๊ณผ๋„ํ•œ ์š”๊ธˆ์„ ์ฑ…์ •ํ•˜๋Š” ๊ฒƒ์€ ๋ฌธ์ œ๊ฐ€ ์žˆ๋‹ค๊ณ  ๋ณธ๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. 
- LG์ƒํ™œ๊ฑด๊ฐ•์˜ ํ•œ๋ฐฉ ํ™”์žฅํ’ˆ ๋ธŒ๋žœ๋“œ โ€˜ํ›„โ€™๋Š” ์ง€๋‚œ๋‹ฌ ๋ง ๋ชจ๋ธ ์ด์˜์•  ์”จ์™€ 11๋…„ ์—ฐ์†์œผ๋กœ ๊ณ„์•ฝ์„ ๊ฐฑ์‹ ํ–ˆ๋‹ค. ํ›„๊ฐ€ ์—ฐ๋งค์ถœ ์•ฝ 4300์–ต์›(์ง€๋‚œํ•ด ๊ธฐ์ค€)์˜ ๋Œ€ํ˜• ๋ธŒ๋žœ๋“œ๋กœ ์„ฑ์žฅํ•˜๊ธฐ๊นŒ์ง€ ์ค‘ํ™”๊ถŒ ํ•œ๋ฅ˜์Šคํƒ€์ธ ์ด์”จ์˜ ๊ณต๋กœ๊ฐ€ ์ปธ๋‹ค๋Š” ์ด์œ ์—์„œ๋‹ค. ํ›„์˜ ๋Œ€ํ‘œ ์ œํ’ˆ์ธ โ€˜๋น„์ฒฉ์ž์ƒ ์—์„ผ์Šคโ€™๊ฐ€ โ€˜์ด์˜์•  ์—์„ผ์Šคโ€™๋ผ๋Š” ๋ณ„์นญ์œผ๋กœ ๋ถˆ๋ฆด ์ •๋„๋กœ ์–‘์ธก์€ ๋ˆ๋ˆํ•œ ๊ด€๊ณ„๋ฅผ ์ด์–ด์˜ค๊ณ  ์žˆ๋‹ค.๋น ๋ฅด๊ฒŒ ๋ณ€ํ•˜๋Š” ์œ ํ–‰๋งŒํผ ๋ชจ๋ธ๋„ ์ž์ฃผ ๋ฐ”๋€Œ๋Š” ํ™”์žฅํ’ˆ์—…๊ณ„์—์„œ 10๋…„ ์ด์ƒ ์žฅ์ˆ˜ํ•˜๋Š” โ€˜๋Œ€๊ธฐ๋กโ€™์„ ์“ด ์—ฐ์˜ˆ์ธ์ด ์†์† ๋“ฑ์žฅํ•˜๊ณ  ์žˆ๋‹ค.์ด์”จ ๋ชป์ง€์•Š์€ ์žฅ์ˆ˜๋ชจ๋ธ๋กœ 10๋…„์งธ SK-โ…ก ๋ชจ๋ธ๋กœ ํ™œ๋™ ์ค‘์ธ ๊น€ํฌ์•  ์”จ๊ฐ€ ๋Œ€ํ‘œ์ ์ด๋‹ค. โ€œ๋†“์น˜์ง€ ์•Š์„ ๊ฑฐ์˜ˆ์š”โ€๋ผ๋Š” ๊น€์”จ์˜ ๊ด‘๊ณ ๋ฌธ๊ตฌ๋Š” SK-โ…ก์˜ ์ƒ์ง•์ด ๋๋‹ค. ํšŒ์‚ฌ ์ธก์€ โ€œSK-โ…ก์™€ ๊น€์”จ๋Š” ์ด์ œ ๋ธŒ๋žœ๋“œ์™€ ๋ชจ๋ธ์˜ ๊ด€๊ณ„๋ฅผ ๋„˜์–ด โ€˜๊ฐ€์กฑโ€™์ด๋ผ๊ณ  ํ‘œํ˜„ํ•ด์•ผ ํ•  ์ •๋„โ€๋ผ๊ณ  ํ–ˆ๋‹ค. ๊ตญ๋‚ด ํ™”์žฅํ’ˆ ๊ด‘๊ณ  ์—ญ์‚ฌ์ƒ ์ตœ์žฅ์ˆ˜ ๊ด‘๊ณ ๋ชจ๋ธ์€ ์ฑ„์‹œ๋ผ ์”จ๋กœ ์•Œ๋ ค์กŒ๋‹ค. 1991๋…„๋ถ€ํ„ฐ 2006๋…„๊นŒ์ง€ 15๋…„ ๋™์•ˆ ์ฝ”๋ฆฌ์•„๋‚˜ ๋ชจ๋ธ๋กœ ํ™œ๋™ํ–ˆ๋‹ค.ํ™”์žฅํ’ˆ ๊ด‘๊ณ ์— ์ž์ฃผ ๋“ฑ์žฅํ•˜๋Š” ์ „์ง€ํ˜„ ์ด๋‚˜์˜ ์†กํ˜œ๊ต ๋“ฑ์€ โ€˜ํŠนA๊ธ‰ ๋ชจ๋ธโ€™์ž„์€ ๋ถ„๋ช…ํ•˜์ง€๋งŒ ๋ธŒ๋žœ๋“œ๋ฅผ ์—ฌ๋Ÿฌ ์ฐจ๋ก€ ๊ฐˆ์•„ํƒ”๋‹ค. ์ „์”จ๋Š” ์—๋›ฐ๋“œ ๋ผ๋„ค์ฆˆ ํ•œ์œจ ์ผ๋ฆฌ ํ—ค๋ผ, ์ด์”จ๋Š” ๋ผ๋„ค์ฆˆ ์•„์ด์˜คํŽ˜ ๋ž‘์ฝค ์ˆจ, ์†ก์”จ๋Š” ์—๋›ฐ๋“œ ์ด๋‹ˆ์Šคํ”„๋ฆฌ ๋ผ๋„ค์ฆˆ ๋“ฑ ๋‹ค์–‘ํ•œ ๋ธŒ๋žœ๋“œ์˜ ๋ชจ๋ธ๋กœ ํ™œ๋™ํ–ˆ๋‹ค.๊น€ํƒœํฌ ์”จ๋Š” 2004๋…„ LG์ƒํ™œ๊ฑด๊ฐ• ์˜คํœ˜ ๋ชจ๋ธ๋กœ ํ™œ๋™ํ•˜๋‹ค๊ฐ€ 2006๋…„ ์•„๋ชจ๋ ˆํผ์‹œํ”ฝ ํ—ค๋ผ๋กœ ๋ฐ”๊พธ๊ณ , 2011๋…„ ๋‹ค์‹œ ์˜คํœ˜๋กœ ๋ณต๊ท€ํ•œ ๋…ํŠนํ•œ ์‚ฌ๋ก€๋‹ค. ์ด ๊ณผ์ •์—์„œ ์•„๋ชจ๋ ˆํผ์‹œํ”ฝ๊ณผ LG์ƒํ™œ๊ฑด๊ฐ•์ด ๊ฑฐ์•ก์˜ ๋ชจ๋ธ๋ฃŒ๋ฅผ ์ œ์‹œํ•˜๋ฉฐ ์น˜์—ดํ•œ โ€˜๊น€ํƒœํฌ ์Ÿํƒˆ์ „โ€™์„ ๋ฒŒ์ด๊ธฐ๋„ ํ–ˆ๋‹ค.ํ™”์žฅํ’ˆ์—…๊ณ„ ๊ด€๊ณ„์ž๋Š” โ€œํ™”์žฅํ’ˆ ๋ธŒ๋žœ๋“œ๊ฐ€ ๋งŽ์•„์ง€๋ฉด์„œ ๋ชจ๋ธ ๊ณ„์•ฝ์„ ํ•  ์—ฐ์˜ˆ์ธ์ด โ€˜๋™์ด ๋‚ฌ๋‹คโ€™๋Š” ์–˜๊ธฐ๊ฐ€ ๋‚˜์˜จ ์ง€ ์˜ค๋ž˜โ€๋ผ๋ฉฐ โ€œ1๋…„ ์•ˆํŒŽ์˜ ๋‹จ๋ฐœ๊ณ„์•ฝ์ด ๋Œ€๋ถ€๋ถ„์ด๋ผ ํ•œ ๋ธŒ๋žœ๋“œ์—์„œ ์žฅ์ˆ˜๋ชจ๋ธ๋กœ ํ™œ๋™ํ•˜๋Š” ๊ฒƒ์€ ๋Œ€๋‹จํžˆ ์–ด๋ ค์šด ์ผโ€์ด๋ผ๊ณ  ๋งํ–ˆ๋‹ค. - ๊ฒฝ๊ธฐ ์šฉ์ธ์‹œ๋Š” ์„ฑ๋‚จ ๋ถ„๋‹น์‹ ๋„์‹œ์™€ ๊ฐ€๊นŒ์šด ์ง€๋ฆฌ์  ์ด์  ๋•๋ถ„์— 2000๋…„๋Œ€ ์ค‘๋ฐ˜ โ€˜๋ฒ„๋ธ”์„ธ๋ธโ€™์œผ๋กœ ๋ถˆ๋ฆฌ๋ฉฐ ์ˆ˜๋„๊ถŒ ์ฃผํƒ์‹œ์žฅ์„ ์ฃผ๋„ํ–ˆ์ง€๋งŒ ๊ณผ์ž‰๊ณต๊ธ‰๊ณผ 2008๋…„ ๊ธˆ์œต์œ„๊ธฐ ์—ฌํŒŒ๋กœ ๋ฏธ๋ถ„์–‘์ด ๊ธ‰์ฆํ•˜๋ฉด์„œ โ€˜๋ถˆ ๊บผ์ง„ ์ง‘โ€™์ด ์†์ถœํ–ˆ๋‹ค. ์ˆ˜๋„๊ถŒ ๋‚ด ๋Œ€ํ‘œ์ ์ธ ๋ฏธ๋ถ„์–‘ ์ง€์—ญ์œผ๋กœ ๊ผฝํ˜”๋‹ค.๊ทธ๋žฌ๋˜ ์šฉ์ธ ์ง€์—ญ ๋ถ„์œ„๊ธฐ๊ฐ€ ๋‹ฌ๋ผ์กŒ๋‹ค. ์ˆ˜๋„๊ถŒ ์ „์„ธ๋‚œ์œผ๋กœ ๋งค๋งค ์ „ํ™˜ ์ˆ˜์š”๊ฐ€ ๋Š˜๋ฉด์„œ ์ง€๋‚œํ•ด 1๋งŒ9055๊ฐ€๊ตฌ์˜ ์•„ํŒŒํŠธ๊ฐ€ ๊ฑฐ๋ž˜๋ผ ์ˆ˜์›์‹œ(2๋งŒ280๊ฐ€๊ตฌ)์— ์ด์–ด ์ˆ˜๋„๊ถŒ ์•„ํŒŒํŠธ ๊ฑฐ๋ž˜๋Ÿ‰ 2์œ„์— ์˜ฌ๋ž๋‹ค. ์ฒญ์•ฝ ์—ด๊ธฐ๋„ ๋‹ฌ์•„์˜ฌ๋ผ ์ง€๋‚œ์ฃผ ๋ถ„์–‘ํ•œ ํ’๋•์ฒœ๋™ โ€˜eํŽธํ•œ์„ธ์ƒ ์ˆ˜์ง€โ€™๋Š” ํ‰๊ท  8.29 ๋Œ€ 1๋กœ 1์ˆœ์œ„์—์„œ ๋งˆ๊ฐ๋๋‹ค.์šฉ์ธ์—์„œ ์ตœ๊ทผ ๊ฐ€์žฅ ์ฃผ๋ชฉ๋ฐ›๊ณ  ์žˆ๋Š” ๊ณณ์€ ์ฒ˜์ธ๊ตฌ ์—ญ๋ถ์ง€๊ตฌ๋‹ค. ์šฉ์ธ์‹œ์ฒญ๊ณผ ์šฉ์ธ๊ต์œก์ฒญ, ์šฉ์ธ๋™๋ถ€๊ฒฝ์ฐฐ์„œ ๋“ฑ์ด ์ž…์ฃผํ•œ ์šฉ์ธํ–‰์ •ํƒ€์šด๊ณผ ๊ฐ€๊น๊ณ  ์ธ๊ทผ ์—ญ์‚ผ์ง€๊ตฌ์™€ ํ•จ๊ป˜ 1๋งŒ์—ฌ๊ฐ€๊ตฌ ๋Œ€๊ทœ๋ชจ ์ฃผ๊ฑฐ๋‹จ์ง€๋กœ ๊ฐœ๋ฐœ๋˜๊ณ  ์žˆ๋‹ค. ์ˆ˜์ง€์™€ ๋™๋ฐฑ์— ์ด์–ด ์šฉ์ธ์„ ๋Œ€ํ‘œํ•˜๋Š” ์‹ ํฅ ์ฃผ๊ฑฐ์ง€๋กœ ๋– ์˜ค๋ฅธ ์—ญ๋ถ์ง€๊ตฌ์—์„œ ์šฐ๋ฏธ๊ฑด์„ค์ด ์ด๋‹ฌ 1260๊ฐ€๊ตฌ ๊ทœ๋ชจ์˜ โ€˜์šฐ๋ฏธ๋ฆฐ ์„ผํŠธ๋ŸดํŒŒํฌโ€™๋ฅผ ๋ถ„์–‘ํ•œ๋‹ค.โ—‹๋…น์ง€์œจ 40%์˜ 1260๊ฐ€๊ตฌ ๋Œ€๋‹จ์ง€์ฒ˜์ธ๊ตฌ์—์„œ ๊ฐ€์žฅ ๋†’์€ 34์ธต ์•„ํŒŒํŠธ๋กœ 1260๊ฐ€๊ตฌ ๋ชจ๋‘ ์ „์šฉ 59ยท75ยท84ใŽก ์ค‘์†Œํ˜•์œผ๋กœ ๊ตฌ์„ฑ๋๋‹ค. 
๋ชจ๋“  ๊ฐ€๊ตฌ๋ฅผ ๋‚จํ–ฅ ์œ„์ฃผ๋กœ ์„ค๊ณ„ํ–ˆ๊ณ  ๊ฑดํ์œจ(๋Œ€์ง€ ๋ฉด์  ๋Œ€๋น„ ๊ฑด๋ฌผ ๋ฐ”๋‹ฅ ๋ฉด์  ๋น„์œจ)์ด 12.8%์— ๋ถˆ๊ณผํ•ด ๋…น์ง€์œจ์ด 40%์— ๋‹ฌํ•œ๋‹ค. ๊ทผ๋ฆฐ๊ณต์› ์–ด๋ฆฐ์ด๊ณต์›๊ณผ ๋งž๋‹ฟ์•„ ์žˆ๊ณ  ํ•จ๋ฐ•์‚ฐ๋„ ๋ผ๊ณ  ์žˆ์–ด โ€˜๋…น์ง€ ํ”„๋ฆฌ๋ฏธ์—„ ๋‹จ์ง€โ€™๋กœ ํ‰๊ฐ€๋ฐ›๋Š”๋‹ค.์—ญ๋ถ์ง€๊ตฌ๋Š” ์šฉ์ธ ์‹œ๋‚ด๋Š” ๋ฌผ๋ก  ์„œ์šธ๋กœ์˜ ์ด๋™์ด ์‰ฝ๋‹ค. ๊ฑธ์–ด์„œ ๊ฐˆ ์ˆ˜ ์žˆ๋Š” ์šฉ์ธ ๊ฒฝ์ „์ฒ  ๋ช…์ง€๋Œ€์—ญ์„ ์ด์šฉํ•ด ๋ถ„๋‹น์„  ๊ธฐํฅ์—ญ์—์„œ ํ™˜์Šนํ•˜๋ฉด ์„œ์šธ ๊ฐ•๋‚จ๊ถŒ๊นŒ์ง€ ์•ฝ 50๋ถ„์ด๋ฉด ๋„์ฐฉํ•  ์ˆ˜ ์žˆ๋‹ค. 2017๋…„ ๊ฐœํ†ต ์˜ˆ์ •์ธ ๊ตญ๋„ 42ํ˜ธ์„  ๋Œ€์ฒด ์šฐํšŒ๋„๋กœ(์ˆ˜์›์‹ ๊ฐˆIC ๋ฐฉ๋ฉด)๋ฅผ ์ด์šฉํ•˜๋ฉด ๊ฒฝ๋ถ€๊ณ ์†๋„๋กœ ๊ธฐํฅIC์™€ ์ˆ˜์›IC๊นŒ์ง€ ๊ฑฐ๋ฆฌ๋„ 12ใŽž ์ •๋„๋กœ ์ค„์–ด๋“ ๋‹ค. ๋‹จ์ง€ ๋ฐ”๋กœ ์˜†์— ์ด๋งˆํŠธ๊ฐ€ ๋ฌธ์„ ์—ด๊ณ  ์ดˆ๋“ฑํ•™๊ต๊ฐ€ ๋“ค์–ด์„ค ์˜ˆ์ •์ด๋‹ค. ์šฉ์‹ ์ค‘ํ•™๊ต์™€ ์šฉ์ธ๊ณ ๋“ฑํ•™๊ต๋„ ๊ฐ€๊น๋‹ค.โ—‹์ค‘์†Œ ํ‰ํ˜• โ€˜ํ˜์‹  ์„ค๊ณ„โ€™ ๋„์ž…์ค‘์†Œํ˜• ํŠนํ™” ์„ค๊ณ„๋„ ๋ˆˆ์— ๋ˆ๋‹ค. ๋ชจ๋“  ๊ฐ€๊ตฌ ์ฃผ๋ฐฉ์„ ์ฃผ๋ถ€์˜ ๋™์„ ์„ ์ตœ์†Œํ™”ํ•˜๋Š” โ€˜ใ„ทโ€™์ž ํ˜•ํƒœ๋กœ ๋ฐฐ์น˜ํ–ˆ๋‹ค. ์ „์šฉ 59ใŽก(Aํƒ€์ž…)์—๋Š” 3๊ฐœ ์นจ์‹ค์— ๋ชจ๋‘ ์ˆ˜๋‚ฉ๊ณต๊ฐ„์„ ์„ค์น˜ํ•œ๋‹ค. ์ „์šฉ 75ใŽก์—๋Š” ํ˜„๊ด€ ์˜†์— ์ž…๊ตฌ๋ฅผ ๋†’์ธ โ€˜์›Œํฌ์ธ ์ˆ˜๋‚ฉ๊ณต๊ฐ„โ€™์„ ์ œ๊ณตํ•œ๋‹ค. ์ „์šฉ 84ใŽก ์ผ๋ถ€ ํƒ€์ž…์—๋Š” ์ฃผ๋ฐฉ ๋Œ€ํ˜• ์ˆ˜๋‚ฉ๊ณต๊ฐ„์ด๋‚˜ ์ž‘์—…๊ณต๊ฐ„์œผ๋กœ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋Œ€ํ˜• ์ฃผ๋ฐฉ์„ ๋“ค์—ฌ ๊ณต๊ฐ„ ํ™œ์šฉ๋„๋ฅผ ๋†’์˜€๋‹ค. ์ผ๋ถ€ ๊ฐ€๊ตฌ์—” ๋ ˆ์ €์šฉํ’ˆ ๋“ฑ์„ ๋ณด๊ด€ํ•  ์ˆ˜ ์žˆ๋Š” ์ง€ํ•˜ ์ฐฝ๊ณ ๋„ ์ œ๊ณตํ•œ๋‹ค. ์•ˆ์ „ํ•˜๊ฒŒ ํƒ๋ฐฐ๋ฅผ ๋ฐœ์†กยท์ˆ˜๋ นํ•  ์ˆ˜ ์žˆ๋Š” ๋ฌด์ธํƒ๋ฐฐ์‹œ์Šคํ…œ๋„ ๊ฐ–์ถœ ๊ณ„ํš์ด๋‹ค.๋ฐฉ๋ฌธํ•œ ์นœ์ธ์ฒ™ ๋“ฑ์ด ๋จธ๋ฌด๋ฅด๊ฑฐ๋‚˜ ๊ธฐ๋…์ผ ํŒŒํ‹ฐ๊ณต๊ฐ„์œผ๋กœ ์ด์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๊ฒŒ์ŠคํŠธํ•˜์šฐ์Šค๋ฅผ ์„ค์น˜ํ•œ๋‹ค. ์ž…์ฃผ์ž ํœด์‹๊ณต๊ฐ„์ธ โ€˜์นดํŽ˜ ๋ฆฐโ€™๋„ ๋งˆ๋ จํ•œ๋‹ค. ์–ด๋ฆฐ ์ž๋…€๋“ค์ด ํ†ตํ•™๋ฒ„์Šค๋ฅผ ์•ˆ์ „ํ•˜๊ฒŒ ๊ธฐ๋‹ค๋ฆด ์ˆ˜ ์žˆ๋„๋ก ์Šค์ฟจ๋ฒ„์Šค์กด์„ ์„ค์น˜ํ•˜๊ณ  ๋‚จ๋…€ ๊ตฌ๋ถ„์ด ์žˆ๋Š” ๋…์„œ์‹ค๋„ ๋ฌธ์„ ์—ฐ๋‹ค. ์‹ค๋‚ด๊ณจํ”„์—ฐ์Šต์žฅ๊ณผ ํ”ผํŠธ๋‹ˆ์Šค์„ผํ„ฐ, ์ƒค์›Œ์‹ค ๋“ฑ ์ปค๋ฎค๋‹ˆํ‹ฐ์‹œ์„ค๋„ ๋งˆ๋ จํ•œ๋‹ค. ๋ชจ๋ธํ•˜์šฐ์Šค๋Š” ์šฉ์ธ์‹œ ์—ญ์‚ผ๋™ ์ฃผ๋ฏผ์„ผํ„ฐ ์˜†์— ๋ฌธ์„ ์—ฐ๋‹ค. ๊น€๋ณดํ˜• ๊ธฐ์ž/๊น€ํ•˜๋‚˜ ํ•œ๊ฒฝ๋‹ท์ปด ๊ธฐ์ž [email protected] - source_sentence: ํ›„์† ๊ณต์ •์—์„œ ์ถ”๊ฐ€ ๋น„์šฉ ๋ฐœ์ƒ์ด ์˜ˆ์ƒ๋˜๋Š” ์„ค๋น„๋ฅผ ์ฃผ๋ฌธํ•œ ๋‚˜๋ผ๋Š”? sentences: - ์‚ผ์„ฑ์ค‘๊ณต์—…์ด ์ง€๋‚œ 1๋ถ„๊ธฐ์— ๋Œ€๊ทœ๋ชจ ์ ์ž๋ฅผ ๋ƒˆ๋‹ค. ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ์˜ ์ž ์žฌ์  ์†์‹ค์— ๋Œ€๋น„ํ•ด ๋Œ€๊ทœ๋ชจ ์ถฉ๋‹น๊ธˆ์„ ์Œ“์•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. โ–ถ๋ณธ์ง€ 4์›”23์ผ์ž A13๋ฉด ์ฐธ์กฐ ์‚ผ์„ฑ์ค‘๊ณต์—…์€ 1๋ถ„๊ธฐ์— ๋งค์ถœ 3์กฐ4311์–ต์›, ์˜์—…์†์‹ค 3625์–ต์›, ๋‹น๊ธฐ์ˆœ์†์‹ค 2724์–ต์›์„ ๊ธฐ๋กํ–ˆ๋‹ค๊ณ  25์ผ ๊ณต์‹œํ–ˆ๋‹ค. ์ž‘๋…„ 1๋ถ„๊ธฐ์— 4402์–ต์›์˜ ์˜์—…์ด์ต๊ณผ 3005์–ต์›์˜ ๋‹น๊ธฐ์ˆœ์ด์ต์„ ๋ƒˆ๋˜ ๊ฒƒ๊ณผ ๋น„๊ตํ•˜๋ฉด ํฐ ํญ์œผ๋กœ ์ ์ž์ „ํ™˜ํ–ˆ๋‹ค. ๋งค์ถœ์€ ์ „๋…„ ๋™๊ธฐ ๋Œ€๋น„ 11.7% ๊ฐ์†Œํ–ˆ์„ ๋ฟ์ธ๋ฐ๋„ ์ด์ต์ด ํฌ๊ฒŒ ์ค„์–ด๋“  ์ด์œ ๋Š” ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ ์†์‹ค์— ๋Œ€๋น„ํ•ด ์•ฝ 5000์–ต์›์˜ ์ถฉ๋‹น๊ธˆ์„ ์Œ“์•˜๊ธฐ ๋•Œ๋ฌธ์ด๋ผ๊ณ  ํšŒ์‚ฌ ์ธก์€ ์„ค๋ช…ํ–ˆ๋‹ค. ์•ž์„œ ์ง€๋‚œ 2์›”๋ถ€ํ„ฐ ์‚ผ์„ฑ์ค‘๊ณต์—…์˜ ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ์™€ ๊ด€๋ จํ•ด ๊ฒฝ์˜์ง„๋‹จ์„ ์ง„ํ–‰ํ•œ ์‚ผ์„ฑ๊ทธ๋ฃน ์ปจํŠธ๋กคํƒ€์›Œ์ธ ๋ฏธ๋ž˜์ „๋žต์‹ค์€ ๋Œ€๊ทœ๋ชจ ๋ถ€์‹ค์ด ์žˆ๋‹ค๊ณ  ํŒ๋‹จํ•˜๊ณ  ์ถฉ๋‹น๊ธˆ์„ ์Œ“๋„๋ก ํ–ˆ๋‹ค. ์‚ผ์„ฑ์ค‘๊ณต์—… ๊ด€๊ณ„์ž๋Š” โ€œ2012๋…„์— ์ˆ˜์ฃผํ•œ ํ˜ธ์ฃผ ์ธํŽ™์Šคํ”„๋กœ์ ํŠธ์˜ ์ต์‹œ์Šค(Ichthys) ํ•ด์–‘๊ฐ€์Šค์ฒ˜๋ฆฌ์„ค๋น„(CPF)์™€ ์ง€๋‚œํ•ด ์ˆ˜์ฃผํ•œ ๋‚˜์ด์ง€๋ฆฌ์•„ ์—์ง€๋‚˜(Egina) ๋ถ€์œ ์‹ ์ƒ์‚ฐ์ €์žฅํ•˜์—ญ์„ค๋น„(FPSO) ๋“ฑ 2๊ฑด์˜ ํ•ด์–‘ํ”Œ๋žœํŠธ ๊ณต์‚ฌ์—์„œ ์†์‹ค์ด ์˜ˆ์ƒ๋œ๋‹คโ€๊ณ  ๋งํ–ˆ๋‹ค. 
๊ทธ๋Š” โ€œ์ธํŽ™์Šคํ”„๋กœ์ ํŠธ์˜ CPF๋Š” ์ƒ์„ธ์„ค๊ณ„ ๋“ฑ ํ›„์† ๊ณต์ •์—์„œ ์‚ฌ์–‘์ด ๋ฐ”๋€Œ๋ฉด์„œ ์ž‘์—… ๋ฌผ๋Ÿ‰๊ณผ ๋น„์šฉ์ด ์ฆ๊ฐ€ํ–ˆ์œผ๋ฉฐ, FPSO๋Š” ๋‚˜์ด์ง€๋ฆฌ์•„ ํ˜„์ง€์—์„œ ์ƒ์‚ฐ ๋น„์šฉ์ด ๋Š˜์–ด๋‚  ๊ฒƒ์œผ๋กœ ๋ณด์ธ๋‹คโ€๊ณ  ๋ง๋ถ™์˜€๋‹ค. ์‚ผ์„ฑ์ค‘๊ณต์—…์€ 2๊ฑด์˜ ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ ์™ธ์— ๋‹ค๋ฅธ ํ”„๋กœ์ ํŠธ๋Š” ์ •์ƒ์ ์œผ๋กœ ์ง„ํ–‰๋˜๊ณ  ์žˆ๋‹ค๊ณ  ๋ฐํ˜”๋‹ค. ํšŒ์‚ฌ ๊ด€๊ณ„์ž๋Š” โ€œ์˜ˆ์ƒ ์†์‹ค์„ 1๋ถ„๊ธฐ์— ๋ฐ˜์˜ํ•œ ๋งŒํผ 2๋ถ„๊ธฐ๋ถ€ํ„ฐ๋Š” ๊ฒฝ์˜ ์‹ค์ ์ด ์ •์ƒ ์ˆ˜์ค€์œผ๋กœ ํšŒ๋ณตํ•  ๊ฒƒโ€์ด๋ผ๊ณ  ๋‚ด๋‹ค๋ดค๋‹ค.์‚ผ์„ฑ์ค‘๊ณต์—…์€ ์ด๋‚  ์‹ค์ ์ „๋ง ๊ณต์‹œ๋ฅผ ํ†ตํ•ด ์˜ฌํ•ด ๋งค์ถœ์ด 14์กฐ6000์–ต์›, ๋ฒ•์ธ์„ธ ๋น„์šฉ ์ฐจ๊ฐ ์ „ ์ˆœ์ด์ต์ด 2000์–ต์› ์ •๋„์ผ ๊ฒƒ์ด๋ผ๊ณ  ๋ฐํ˜”๋‹ค. - ์ฐจ์ž…๊ธˆ ๊ฐš๊ธฐ๊ฐ€ ๋ฒ…์ฐฌ ํ•œ๊ณ„๊ธฐ์—… ๊ฐ€์šด๋ฐ ๋Œ€๊ธฐ์—…์ด ๋Š˜๋ฉด์„œ ๋ถ€์‹ค์œ„ํ—˜์„ โ€˜๋Œ€ํ˜•ํ™”โ€™ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒฝ๊ณ ๊ฐ€ ๋‚˜์™”๋‹ค. ๋Œ€๊ธฐ์—… ๋ถ€์‹ค์ด ํ˜„์‹ค๋กœ ๋‹ฅ์น  ๊ฒฝ์šฐ ์ „์ฒด ์ž๊ธˆ์‹œ์žฅ์˜ ๋ถˆ์•ˆ์œผ๋กœ ๋ฒˆ์งˆ ์ˆ˜ ์žˆ๋‹ค๋Š” ์šฐ๋ ค๋‹ค. LG๊ฒฝ์ œ์—ฐ๊ตฌ์›์€ 3์ผ โ€˜๋ถ€์‹ค์œ„ํ—˜ ๊ธฐ์—…์˜ ๋Œ€ํ˜•ํ™”๊ฐ€ ๊ธˆ์œตํšŒ์‚ฌ ๊ฑด์ „์„ฑ์„ ๋–จ์–ด๋œจ๋ฆฌ๊ณ  ์žˆ๋‹คโ€™๋Š” ์ œ๋ชฉ์˜ ๋ณด๊ณ ์„œ์—์„œ ๊ตญ๋‚ด ๊ธˆ์œตํšŒ์‚ฌ์˜ ๋ถ€์‹ค์ž์‚ฐ ๊ทœ๋ชจ๊ฐ€ ์˜ฌ ๋“ค์–ด ์ง€๋‚œ 9์›” ๋ง๊นŒ์ง€ 6์กฐ8000์–ต์› ๋Š˜์–ด๋‚œ 39์กฐ8000์–ต์›์— ๋‹ฌํ–ˆ๋‹ค๋ฉฐ ์ด๊ฐ™์ด ๋ถ„์„ํ–ˆ๋‹ค. ์ดํ•œ๋“ ์—ฐ๊ตฌ์œ„์›์€ โ€œ์˜ฌ ๋“ค์–ด ์ฆ๊ฐ€ํ•œ ๋ถ€์‹ค์ž์‚ฐ์€ ๋Œ€๋ถ€๋ถ„ ์€ํ–‰์—์„œ ๋ฐœ์ƒํ–ˆ๋Š”๋ฐ ๋Œ€๊ธฐ์—… ๋Œ€์ถœ์ด ํŠนํžˆ ๋ฌธ์ œ๊ฐ€ ๋๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์€ํ–‰ ๋ถ€๋ฌธ์˜ ๊ฒฝ์šฐ ๋Œ€๊ธฐ์—…์˜ ๋ถ€์‹ค์ฑ„๊ถŒ ์ฆ๊ฐ€ํญ์€ ์˜ฌ ๋“ค์–ด 9์›”๊นŒ์ง€ 8์กฐ5000์–ต์›์— ๋‹ฌํ•ด ์ง€๋‚œํ•ด ๊ฐ™์€ ๊ธฐ๊ฐ„์˜ 3์กฐ2000์–ต์›์„ ํ›จ์”ฌ ์›ƒ๋Œ์•˜๋‹ค. ๊ฐ™์€ ๊ธฐ๊ฐ„ ์ค‘์†Œ๊ธฐ์—…์˜ ๋ถ€์‹ค์ฑ„๊ถŒ ์ฆ๊ฐ€ํญ์€ 10์กฐ4000์–ต์›์œผ๋กœ ์ „๋…„ ๋™๊ธฐ์™€ ๋™์ผํ–ˆ๋‹ค. ๋ณด๊ณ ์„œ๋Š” ์˜ฌ ๋“ค์–ด ๋Œ€๊ธฐ์—…์˜ ๋ถ€์‹ค ์ •๋„๊ฐ€ ์ปค์ง€๊ณ  ์žˆ๋‹ค๋ฉฐ ์ค‘์†Œ๊ธฐ์—…์€ ๊ธ€๋กœ๋ฒŒ ๊ธˆ์œต์œ„๊ธฐ ๋‹น์‹œ ๊ตฌ์กฐ์กฐ์ •์ด ์ƒ๋‹นํžˆ ์ง„ํ–‰๋œ ๋ฐ˜๋ฉด ๋Œ€๊ธฐ์—…์€ ์ตœ๊ทผ์—์•ผ ๋ถ€์‹ค์ด ํ˜„์‹คํ™”๋˜๊ธฐ ์‹œ์ž‘ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์ด๋ผ๊ณ  ๋ถ„์„ํ–ˆ๋‹ค. ์ด์ž๋ณด์ƒ๋ฐฐ์œจ 1์„ ๋ฐ‘๋Œ์•„ ์˜์—…์ด์ต์œผ๋กœ ์ด์ž๋„ ๊ฐš์ง€ ๋ชปํ•˜๋Š” ํ•œ๊ณ„๊ธฐ์—…์„ ์‚ดํŽด๋ด๋„ ๋Œ€ํ˜•ํ™” ์ถ”์„ธ๊ฐ€ ๋‘๋“œ๋Ÿฌ์กŒ๋‹ค. ์ „์ฒด ์ƒ์žฅ๊ธฐ์—…์˜ ์ฐจ์ž…๊ธˆ ๊ฐ€์šด๋ฐ ํ•œ๊ณ„๊ธฐ์—… ์ฐจ์ž…๊ธˆ์ด ์ฐจ์ง€ํ•˜๋Š” ๋น„์ค‘์€ 2005๋…„ 13.3%์—์„œ ์˜ฌํ•ด ์ƒ๋ฐ˜๊ธฐ 34.0%๋กœ ํ™•๋Œ€๋๋‹ค. ํ•œ๊ณ„๊ธฐ์—…์˜ ํ‰๊ท  ์ฐจ์ž…๊ธˆ์ด ๊ฐ™์€ ๊ธฐ๊ฐ„ 1270์–ต์›์—์„œ 6799์–ต์›์œผ๋กœ 5.4๋ฐฐ ๋›ด ๋ฐ ๋”ฐ๋ฅธ ๊ฒƒ์ด๋‹ค. ํ•œ๊ณ„๊ธฐ์—…์˜ ์ฐจ์ž…๊ธˆ ๊ฐ€์šด๋ฐ ๋Œ€๊ธฐ์—…์ด ์ฐจ์ง€ํ•˜๋Š” ๋น„์ค‘์ด 93.2%์—์„œ 99.1%๊นŒ์ง€ ์น˜์†Ÿ์œผ๋ฉด์„œ ๊ฐœ๋ณ„ ๋ถ€์‹ค์˜ ๋ฉ์น˜ ์ž์ฒด๊ฐ€ ์ปค์กŒ๋‹ค. ์ด ์—ฐ๊ตฌ์œ„์›์€ โ€œ์ƒ์žฅ์‚ฌ ๊ฐ€์šด๋ฐ ํ•œ๊ณ„๊ธฐ์—…์˜ ์ฐจ์ž…๊ธˆ์€ ๋Œ€๋ถ€๋ถ„ ๋Œ€๊ธฐ์—…์ด ๊ฐ–๊ณ  ์žˆ๋Š” ์…ˆโ€์ด๋ผ๋ฉฐ โ€œ1๊ฐœ ๋Œ€๊ธฐ์—…์˜ ๋ถ€์‹ค์€ 25๊ฐœ ์ค‘์†Œ๊ธฐ์—…์˜ ๋ถ€์‹ค๊ณผ ๋น„์Šทํ•  ์ •๋„๋กœ ์‹œ์žฅ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์ด ํฌ๋‹ค๋Š” ๊ฒŒ ๋ฌธ์ œโ€๋ผ๊ณ  ์šฐ๋ คํ–ˆ๋‹ค.๋ณด๊ณ ์„œ๋Š” ์œ„ํ—˜์„ ์ตœ์†Œํ™”ํ•˜๋ ค๋ฉด ์„ ์ œ์ ์ธ ๊ตฌ์กฐ์กฐ์ •์ด ํ•ด๋‹ต์ด๋ผ๋ฉฐ ๋ถ€์‹ค ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๊ธฐ์—…์„ ์„ ๋ณ„ํ•ด ์ถ”๊ฐ€์ ์ธ ์ž๊ธˆ ๊ณต๊ธ‰์„ ์–ต์ œํ•ด์•ผ ๋ถ€์‹ค ํ™•์‚ฐ์„ ๋ง‰์„ ์ˆ˜ ์žˆ๋‹ค๊ณ  ์ง€์ ํ–ˆ๋‹ค. - "1967๋…„ ์ œ3์ฐจ ์ค‘๋™ ์ „์Ÿ์—์„œ ์ด์Šค๋ผ์—˜์˜ ์••๋„์ ์ธ ์Šน๋ฆฌ์— ์ด์–ด ์•„๋ž ๋‹จ์ฒด๋“ค์˜ ๋‹ค์ˆ˜๋Š” ์˜ํ† ๋ฅผ ํšŒ๋ณตํ•˜๊ณ  ๋‹ค๋ฅธ ๋ชฉ์ ๋“ค์„ ์ถ”์ง„ํ•˜๋Š” ๋ฐ ์ „ํ†ต์ ์ธ\ \ ๊ฐ ์ฃผ ๊ฐ„์˜ ๊ต์ „ ์ƒํƒœ๋กœ ์–‘์ž ํƒ์ผ๋“ค์„ ์ฐพ๊ณ  ์žˆ์—ˆ๋‹ค. 
์ด์Šค๋ผ์—˜์€ ๋˜ํ’€์ด์ ์œผ๋กœ ํŒ”๋ ˆ์Šคํƒ€์ธ์˜ ํ”ผ๋‹ค์ธ ๊ฒŒ๋ฆด๋ผ๋“ค์— ์˜ํ•˜์—ฌ ๊ตญ๊ฒฝ์„ ๊ฑด๋„ˆ๋Š” ๊ณต๊ฒฉ๋“ค์—\ \ ์˜ํ•˜์—ฌ ํƒ€๊ฒฉ๋˜์—ˆ๋‹ค.\n\n1970๋…„ 9์›” 1์ผ ๊ตญ์™•์„ ์•”์‚ดํ•˜๋Š” ๋ฐ ๋ช‡๋ช‡์˜ ์‹œ๋„๋“ค์ด ์‹คํŒจํ•˜์˜€๋‹ค. 9์›” 6์ผ ํŒ”๋ ˆ์Šคํƒ€์ธ ํ•ด๋ฐฉ๋Œ€์ค‘์ „์„ ์˜ ๋‚ฉ์น˜\ \ ์‚ฌ๊ฑด๋“ค์˜ ์—ฐ์†๋“ค์—์„œ 3๋Œ€์˜ ํ•ญ๊ณต๊ธฐ๊ฐ€ ๊ทธ๋“ค์— ์˜ํ•˜์—ฌ ๋‚ฉ์น˜๋˜์—ˆ๋Š” ๋ฐ ์ž๋ฅด์นด์— ์ƒ๋ฅ™ํ•œ ์Šค์œ„์Šค ํ•ญ๊ณต๊ณผ TWA ํ•ญ๊ณต, ๊ทธ๋ฆฌ๊ณ  ์นด์ด๋กœ์— ์ƒ๋ฅ™ํ•œ ํŒฌ์•„๋ฉ”๋ฆฌ์นธ\ \ ํ•ญ๊ณต์ด์—ˆ๋‹ค. 9์›” 9์ผ ๋‹น์‹œ ๋ฐ”๋ ˆ์ธ์œผ๋กœ๋ถ€ํ„ฐ ์˜๊ตญํ•ด์™ธํ•ญ๊ณต ํ•ญ๊ณต๊ธฐ๋„ ๋˜ํ•œ ์ž๋ฅด์นด๋กœ ๋‚ฉ์น˜๋˜์—ˆ๋‹ค. ์ „๋ถ€์˜ ์ธ์งˆ๋“ค์ด ์˜ฎ๊ฒจ์ง„ ํ›„, ํ•ญ๊ณต๊ธฐ๋“ค์€ ์ง€์‹œ์ ์œผ๋กœ\ \ ํ…”๋ ˆ๋น„์ „ ์นด๋ฉ”๋ผ๋“ค ์•ž์— ํญ๋ฐœ๋˜์—ˆ๋‹ค. ๊ตญ์™•์„ ์ง์ ‘ ๋งž์„œ ํ™”๋‚˜๊ฒŒ ํ•œ ๋ฐ˜๋ž€์ž๋“ค์€ ์ด๋ฅด๋น„๋“œ ์ง€์—ญ์„ \"ํ•ด๋ฐฉ๋œ ์ง€๋ฐฉ\"์œผ๋กœ ์„ ์–ธํ•˜์˜€๋‹ค.\n\n9์›”\ \ 16์ผ ํ›„์„ธ์ธ ๊ตญ์™•์€ ๊ณ„์—„๋ น์„ ์„ ํฌํ•˜์˜€๋‹ค. ์ด์–ด์ง„ ๋‚  ์š”๋ฅด๋‹จ์˜ ํƒฑํฌ๋“ค์€ ์•”๋งŒ์— ์žˆ๋Š” ํŒ”๋ ˆ์Šคํƒ€์ธ์˜ ๊ธฐ๊ตฌ๋“ค์˜ ๋ณธ๋ถ€๋“ค์„ ๊ณต๊ฒฉํ•˜์˜€๊ณ , ์œก๊ตฐ์€ ๋˜ํ•œ\ \ ์ž๋ฅด์นด, ์ด๋ฅด๋น„๋“œ, ์‚ดํŠธ์™€ ์Šค์›จ์ผ๋ ˆ์— ์žˆ๋Š” ์ง„์˜๋“ค์„ ๊ณต๊ฒฉํ•˜๊ธฐ๋„ ํ•˜์˜€๋‹ค.\n\n1970๋…„ 9์›”์€ ๊ฒ€์€ 9์›”๋กœ ์•Œ๋ ค์กŒ์œผ๋ฉฐ ์–ด์ฉŒ๋‹ค \"ํ›„์™ธ์ ์ธ\ \ ์‚ฌ๊ฑด๋“ค์˜ ์‹œ๊ธฐ\"๋กœ์„œ ์–ธ๊ธ‰๋˜์—ˆ๋‹ค. ๊ทธ ์ผ์€ 34์„ธ์˜ ๊ตฐ์ฃผ๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ์ž์‹ ์˜ ์™•์ •์„ ํƒ€๋„ํ•˜๋Š” ์‹œ๋„๋“ค์„ ์ง„์••ํ•œ ํ•œํ•ด์˜€๋‹ค. ํญ๋ ฅ์€ ์–‘์ชฝ์œผ๋กœ๋ถ€ํ„ฐ\ \ 7์ฒœ์—์„œ 8์ฒœ์˜ ์‚ฌ๋ง์— ๊ฒฐ๊ณผ๋ฅผ ๊ฐ€์ ธ์™”๋‹ค. ๋ฌด์žฅํ•œ ๋ถ„์Ÿ์€ ํŒ”๋ ˆ์Šคํƒ€์ธ ํ•ด๋ฐฉ ๊ธฐ๊ตฌ์™€ ์ˆ˜์ฒœ๋ช…์˜ ํŒ”๋ ˆ์Šคํƒ€์ธ์ธ๋“ค์„ ๋ ˆ๋ฐ”๋…ผ์œผ๋กœ ๋ฐฐ์ œ์™€ ํ•จ๊ป˜ 1971๋…„\ \ 7์›”๊นŒ์ง€ ์ง€์†๋˜์—ˆ๋‹ค. \n\n๊ฒฐ๊ณผ๋กœ์„œ ํ›„์„ธ์ธ์ด ์กฐ๊ตญ์—์„œ ์ธ๊ธฐ๋ฅผ ์œ ์ง€ํ•˜์˜€์–ด๋„ ์•„๋ž ์„ธ๊ณ„๋Š” 10๋…„๊ฐ„ ์„ธ์›”์˜ ๋‚˜๋จธ์ง€๋ฅผ ํ†ตํ•˜์—ฌ ๊ทธ๋ฅผ ํฌ๊ฒŒ ๊ณ ๋ฆฝ์‹œ์ผฐ๋‹ค.\ \ 1974๋…„ ์•„๋ž ์ง€๋„์ž๋“ค์€ ํŒ”๋ ˆ์Šคํƒ€์ธ ํ•ด๋ฐฉ ๊ธฐ๊ตฌ๋ฅผ \"ํŒ”๋ ˆ์Šคํƒ€์ธ ๊ตญ๋ฏผ์˜ ๋‹จ ํ•˜๋‚˜์˜ ํ•ฉ๋ฒ•์ ์ธ ๋Œ€ํ‘œ\"๋กœ ์„ ์–ธํ•˜์—ฌ ์š”๋ฅด๋‹จ๊ฐ• ์„œ์•ˆ ์ง€๊ตฌ์˜ ํŒ”๋ ˆ์Šคํƒ€์ธ์ธ๋“ค์„\ \ ์œ„ํ•œ ์—ฐ์„ค์ž๋กœ์„œ ํ›„์„ธ์ธ์˜ ์—ญํ• ์„ ๊ฐ€์ ธ๊ฐ”๋‹ค.\n\n์ง€๋ฏธ ์นดํ„ฐ ๋ฏธ๊ตญ ๋Œ€ํ†ต๋ น, ์•ˆ์™€๋ฅด ์‚ฌ๋‹คํŠธ ์ด์ง‘ํŠธ ๋Œ€ํ†ต๋ น๊ณผ ๋ฉ”๋‚˜ํ—ด ๋ฒ ๊ธด ์ด์Šค๋ผ์—˜ ์ด๋ฆฌ ์‚ฌ์ด์˜\ \ 1978๋…„ ์บ ํ”„๋ฐ์ด๋น„๋“œ ํ˜‘์ •์€ ์š”๋ฅด๋‹จ์˜ ํ›„์„ธ์ธ ๊ตญ์™•์„ ๋“ค์–ด์˜ค์ง€ ๋ชปํ•˜๊ฒŒ ํ•˜์˜€๋‹ค. ์ด์–ด์ง„ ํ•ด ํ›„์„ธ์ธ ๊ตญ์™•์€ ์œ ์—” ์ดํšŒ ์—ฐ์„ค์—์„œ ํ˜‘์ •์„ ๋น„๋‚œํ•˜์˜€๋‹ค.\ \ ์ด ์ž…์žฅ์€ ๊ทธ์™€ ์กฐ๊ตญ์ด ํ•„์š”ํ•˜๋˜ ๋‹ค๋ฅธ ์•„๋ž ์ง€๋„์ž๋“ค๊ณผ ์šฐํ˜ธ๋ฅผ ์žฌ์„ค๋ฆฝํ•˜๋Š” ๋„์›€์„ ์ฃผ์—ˆ๋‹ค. \n\n ํ›„์„ธ์ธ์€ ํŒ”๋ ˆ์Šคํƒ€์ธ ํ•ด๋ฐฉ ๊ธฐ๊ตฌ์˜ ์ง€๋„์ž\ \ ์•ผ์„ธ๋ฅด ์•„๋ผํŒŒํŠธ์™€ ํ™”ํ•ด์—์„œ ์ „ํ˜€ ์„ฑ๊ณต์ ์ด์ง€ ์•Š์•˜๊ณ , ๊ฒฐ๊ตญ 1988๋…„ ์š”๋ฅด๋‹จ๊ฐ• ์„œ์•ˆ ์ง€๊ตฌ์˜ ํ–‰์ •์ ๊ณผ ๋ฒ•์ ์˜ ํ†ต์น˜๋กœ ์š”๋ฅด๋‹จ์˜ ์ฃผ์žฅ์„ ํฌ๊ธฐํ•˜์˜€๋‹ค." pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine model-index: - name: SentenceTransformer based on byKim93/klue-roberta-base-klue-sts-2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: Unknown type: unknown metrics: - type: pearson_cosine value: 0.8517344970710515 name: Pearson Cosine - type: spearman_cosine value: 0.8454245670475068 name: Spearman Cosine --- # SentenceTransformer based on byKim93/klue-roberta-base-klue-sts-2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [byKim93/klue-roberta-base-klue-sts-2](https://huggingface.co/byKim93/klue-roberta-base-klue-sts-2). 
It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [byKim93/klue-roberta-base-klue-sts-2](https://huggingface.co/byKim93/klue-roberta-base-klue-sts-2) <!-- at revision c7b29abd6e3ab6122a07dcb926dc11d4e38cb572 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the ๐Ÿค— Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'ํ›„์† ๊ณต์ •์—์„œ ์ถ”๊ฐ€ ๋น„์šฉ ๋ฐœ์ƒ์ด ์˜ˆ์ƒ๋˜๋Š” ์„ค๋น„๋ฅผ ์ฃผ๋ฌธํ•œ ๋‚˜๋ผ๋Š”?', '์‚ผ์„ฑ์ค‘๊ณต์—…์ด ์ง€๋‚œ 1๋ถ„๊ธฐ์— ๋Œ€๊ทœ๋ชจ ์ ์ž๋ฅผ ๋ƒˆ๋‹ค. ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ์˜ ์ž ์žฌ์  ์†์‹ค์— ๋Œ€๋น„ํ•ด ๋Œ€๊ทœ๋ชจ ์ถฉ๋‹น๊ธˆ์„ ์Œ“์•˜๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. โ–ถ๋ณธ์ง€ 4์›”23์ผ์ž A13๋ฉด ์ฐธ์กฐ ์‚ผ์„ฑ์ค‘๊ณต์—…์€ 1๋ถ„๊ธฐ์— ๋งค์ถœ 3์กฐ4311์–ต์›, ์˜์—…์†์‹ค 3625์–ต์›, ๋‹น๊ธฐ์ˆœ์†์‹ค 2724์–ต์›์„ ๊ธฐ๋กํ–ˆ๋‹ค๊ณ  25์ผ ๊ณต์‹œํ–ˆ๋‹ค. ์ž‘๋…„ 1๋ถ„๊ธฐ์— 4402์–ต์›์˜ ์˜์—…์ด์ต๊ณผ 3005์–ต์›์˜ ๋‹น๊ธฐ์ˆœ์ด์ต์„ ๋ƒˆ๋˜ ๊ฒƒ๊ณผ ๋น„๊ตํ•˜๋ฉด ํฐ ํญ์œผ๋กœ ์ ์ž์ „ํ™˜ํ–ˆ๋‹ค. ๋งค์ถœ์€ ์ „๋…„ ๋™๊ธฐ ๋Œ€๋น„ 11.7% ๊ฐ์†Œํ–ˆ์„ ๋ฟ์ธ๋ฐ๋„ ์ด์ต์ด ํฌ๊ฒŒ ์ค„์–ด๋“  ์ด์œ ๋Š” ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ ์†์‹ค์— ๋Œ€๋น„ํ•ด ์•ฝ 5000์–ต์›์˜ ์ถฉ๋‹น๊ธˆ์„ ์Œ“์•˜๊ธฐ ๋•Œ๋ฌธ์ด๋ผ๊ณ  ํšŒ์‚ฌ ์ธก์€ ์„ค๋ช…ํ–ˆ๋‹ค. ์•ž์„œ ์ง€๋‚œ 2์›”๋ถ€ํ„ฐ ์‚ผ์„ฑ์ค‘๊ณต์—…์˜ ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ์™€ ๊ด€๋ จํ•ด ๊ฒฝ์˜์ง„๋‹จ์„ ์ง„ํ–‰ํ•œ ์‚ผ์„ฑ๊ทธ๋ฃน ์ปจํŠธ๋กคํƒ€์›Œ์ธ ๋ฏธ๋ž˜์ „๋žต์‹ค์€ ๋Œ€๊ทœ๋ชจ ๋ถ€์‹ค์ด ์žˆ๋‹ค๊ณ  ํŒ๋‹จํ•˜๊ณ  ์ถฉ๋‹น๊ธˆ์„ ์Œ“๋„๋ก ํ–ˆ๋‹ค. ์‚ผ์„ฑ์ค‘๊ณต์—… ๊ด€๊ณ„์ž๋Š” โ€œ2012๋…„์— ์ˆ˜์ฃผํ•œ ํ˜ธ์ฃผ ์ธํŽ™์Šคํ”„๋กœ์ ํŠธ์˜ ์ต์‹œ์Šค(Ichthys) ํ•ด์–‘๊ฐ€์Šค์ฒ˜๋ฆฌ์„ค๋น„(CPF)์™€ ์ง€๋‚œํ•ด ์ˆ˜์ฃผํ•œ ๋‚˜์ด์ง€๋ฆฌ์•„ ์—์ง€๋‚˜(Egina) ๋ถ€์œ ์‹ ์ƒ์‚ฐ์ €์žฅํ•˜์—ญ์„ค๋น„(FPSO) ๋“ฑ 2๊ฑด์˜ ํ•ด์–‘ํ”Œ๋žœํŠธ ๊ณต์‚ฌ์—์„œ ์†์‹ค์ด ์˜ˆ์ƒ๋œ๋‹คโ€๊ณ  ๋งํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ์ธํŽ™์Šคํ”„๋กœ์ ํŠธ์˜ CPF๋Š” ์ƒ์„ธ์„ค๊ณ„ ๋“ฑ ํ›„์† ๊ณต์ •์—์„œ ์‚ฌ์–‘์ด ๋ฐ”๋€Œ๋ฉด์„œ ์ž‘์—… ๋ฌผ๋Ÿ‰๊ณผ ๋น„์šฉ์ด ์ฆ๊ฐ€ํ–ˆ์œผ๋ฉฐ, FPSO๋Š” ๋‚˜์ด์ง€๋ฆฌ์•„ ํ˜„์ง€์—์„œ ์ƒ์‚ฐ ๋น„์šฉ์ด ๋Š˜์–ด๋‚  ๊ฒƒ์œผ๋กœ ๋ณด์ธ๋‹คโ€๊ณ  ๋ง๋ถ™์˜€๋‹ค. 
์‚ผ์„ฑ์ค‘๊ณต์—…์€ 2๊ฑด์˜ ํ•ด์–‘ํ”Œ๋žœํŠธ ํ”„๋กœ์ ํŠธ ์™ธ์— ๋‹ค๋ฅธ ํ”„๋กœ์ ํŠธ๋Š” ์ •์ƒ์ ์œผ๋กœ ์ง„ํ–‰๋˜๊ณ  ์žˆ๋‹ค๊ณ  ๋ฐํ˜”๋‹ค. ํšŒ์‚ฌ ๊ด€๊ณ„์ž๋Š” โ€œ์˜ˆ์ƒ ์†์‹ค์„ 1๋ถ„๊ธฐ์— ๋ฐ˜์˜ํ•œ ๋งŒํผ 2๋ถ„๊ธฐ๋ถ€ํ„ฐ๋Š” ๊ฒฝ์˜ ์‹ค์ ์ด ์ •์ƒ ์ˆ˜์ค€์œผ๋กœ ํšŒ๋ณตํ•  ๊ฒƒโ€์ด๋ผ๊ณ  ๋‚ด๋‹ค๋ดค๋‹ค.์‚ผ์„ฑ์ค‘๊ณต์—…์€ ์ด๋‚  ์‹ค์ ์ „๋ง ๊ณต์‹œ๋ฅผ ํ†ตํ•ด ์˜ฌํ•ด ๋งค์ถœ์ด 14์กฐ6000์–ต์›, ๋ฒ•์ธ์„ธ ๋น„์šฉ ์ฐจ๊ฐ ์ „ ์ˆœ์ด์ต์ด 2000์–ต์› ์ •๋„์ผ ๊ฒƒ์ด๋ผ๊ณ  ๋ฐํ˜”๋‹ค.', '์ฐจ์ž…๊ธˆ ๊ฐš๊ธฐ๊ฐ€ ๋ฒ…์ฐฌ ํ•œ๊ณ„๊ธฐ์—… ๊ฐ€์šด๋ฐ ๋Œ€๊ธฐ์—…์ด ๋Š˜๋ฉด์„œ ๋ถ€์‹ค์œ„ํ—˜์„ โ€˜๋Œ€ํ˜•ํ™”โ€™ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ๊ฒฝ๊ณ ๊ฐ€ ๋‚˜์™”๋‹ค. ๋Œ€๊ธฐ์—… ๋ถ€์‹ค์ด ํ˜„์‹ค๋กœ ๋‹ฅ์น  ๊ฒฝ์šฐ ์ „์ฒด ์ž๊ธˆ์‹œ์žฅ์˜ ๋ถˆ์•ˆ์œผ๋กœ ๋ฒˆ์งˆ ์ˆ˜ ์žˆ๋‹ค๋Š” ์šฐ๋ ค๋‹ค. LG๊ฒฝ์ œ์—ฐ๊ตฌ์›์€ 3์ผ โ€˜๋ถ€์‹ค์œ„ํ—˜ ๊ธฐ์—…์˜ ๋Œ€ํ˜•ํ™”๊ฐ€ ๊ธˆ์œตํšŒ์‚ฌ ๊ฑด์ „์„ฑ์„ ๋–จ์–ด๋œจ๋ฆฌ๊ณ  ์žˆ๋‹คโ€™๋Š” ์ œ๋ชฉ์˜ ๋ณด๊ณ ์„œ์—์„œ ๊ตญ๋‚ด ๊ธˆ์œตํšŒ์‚ฌ์˜ ๋ถ€์‹ค์ž์‚ฐ ๊ทœ๋ชจ๊ฐ€ ์˜ฌ ๋“ค์–ด ์ง€๋‚œ 9์›” ๋ง๊นŒ์ง€ 6์กฐ8000์–ต์› ๋Š˜์–ด๋‚œ 39์กฐ8000์–ต์›์— ๋‹ฌํ–ˆ๋‹ค๋ฉฐ ์ด๊ฐ™์ด ๋ถ„์„ํ–ˆ๋‹ค. ์ดํ•œ๋“ ์—ฐ๊ตฌ์œ„์›์€ โ€œ์˜ฌ ๋“ค์–ด ์ฆ๊ฐ€ํ•œ ๋ถ€์‹ค์ž์‚ฐ์€ ๋Œ€๋ถ€๋ถ„ ์€ํ–‰์—์„œ ๋ฐœ์ƒํ–ˆ๋Š”๋ฐ ๋Œ€๊ธฐ์—… ๋Œ€์ถœ์ด ํŠนํžˆ ๋ฌธ์ œ๊ฐ€ ๋๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์€ํ–‰ ๋ถ€๋ฌธ์˜ ๊ฒฝ์šฐ ๋Œ€๊ธฐ์—…์˜ ๋ถ€์‹ค์ฑ„๊ถŒ ์ฆ๊ฐ€ํญ์€ ์˜ฌ ๋“ค์–ด 9์›”๊นŒ์ง€ 8์กฐ5000์–ต์›์— ๋‹ฌํ•ด ์ง€๋‚œํ•ด ๊ฐ™์€ ๊ธฐ๊ฐ„์˜ 3์กฐ2000์–ต์›์„ ํ›จ์”ฌ ์›ƒ๋Œ์•˜๋‹ค. ๊ฐ™์€ ๊ธฐ๊ฐ„ ์ค‘์†Œ๊ธฐ์—…์˜ ๋ถ€์‹ค์ฑ„๊ถŒ ์ฆ๊ฐ€ํญ์€ 10์กฐ4000์–ต์›์œผ๋กœ ์ „๋…„ ๋™๊ธฐ์™€ ๋™์ผํ–ˆ๋‹ค. ๋ณด๊ณ ์„œ๋Š” ์˜ฌ ๋“ค์–ด ๋Œ€๊ธฐ์—…์˜ ๋ถ€์‹ค ์ •๋„๊ฐ€ ์ปค์ง€๊ณ  ์žˆ๋‹ค๋ฉฐ ์ค‘์†Œ๊ธฐ์—…์€ ๊ธ€๋กœ๋ฒŒ ๊ธˆ์œต์œ„๊ธฐ ๋‹น์‹œ ๊ตฌ์กฐ์กฐ์ •์ด ์ƒ๋‹นํžˆ ์ง„ํ–‰๋œ ๋ฐ˜๋ฉด ๋Œ€๊ธฐ์—…์€ ์ตœ๊ทผ์—์•ผ ๋ถ€์‹ค์ด ํ˜„์‹คํ™”๋˜๊ธฐ ์‹œ์ž‘ํ–ˆ๊ธฐ ๋•Œ๋ฌธ์ด๋ผ๊ณ  ๋ถ„์„ํ–ˆ๋‹ค. ์ด์ž๋ณด์ƒ๋ฐฐ์œจ 1์„ ๋ฐ‘๋Œ์•„ ์˜์—…์ด์ต์œผ๋กœ ์ด์ž๋„ ๊ฐš์ง€ ๋ชปํ•˜๋Š” ํ•œ๊ณ„๊ธฐ์—…์„ ์‚ดํŽด๋ด๋„ ๋Œ€ํ˜•ํ™” ์ถ”์„ธ๊ฐ€ ๋‘๋“œ๋Ÿฌ์กŒ๋‹ค. ์ „์ฒด ์ƒ์žฅ๊ธฐ์—…์˜ ์ฐจ์ž…๊ธˆ ๊ฐ€์šด๋ฐ ํ•œ๊ณ„๊ธฐ์—… ์ฐจ์ž…๊ธˆ์ด ์ฐจ์ง€ํ•˜๋Š” ๋น„์ค‘์€ 2005๋…„ 13.3%์—์„œ ์˜ฌํ•ด ์ƒ๋ฐ˜๊ธฐ 34.0%๋กœ ํ™•๋Œ€๋๋‹ค. ํ•œ๊ณ„๊ธฐ์—…์˜ ํ‰๊ท  ์ฐจ์ž…๊ธˆ์ด ๊ฐ™์€ ๊ธฐ๊ฐ„ 1270์–ต์›์—์„œ 6799์–ต์›์œผ๋กœ 5.4๋ฐฐ ๋›ด ๋ฐ ๋”ฐ๋ฅธ ๊ฒƒ์ด๋‹ค. ํ•œ๊ณ„๊ธฐ์—…์˜ ์ฐจ์ž…๊ธˆ ๊ฐ€์šด๋ฐ ๋Œ€๊ธฐ์—…์ด ์ฐจ์ง€ํ•˜๋Š” ๋น„์ค‘์ด 93.2%์—์„œ 99.1%๊นŒ์ง€ ์น˜์†Ÿ์œผ๋ฉด์„œ ๊ฐœ๋ณ„ ๋ถ€์‹ค์˜ ๋ฉ์น˜ ์ž์ฒด๊ฐ€ ์ปค์กŒ๋‹ค. ์ด ์—ฐ๊ตฌ์œ„์›์€ โ€œ์ƒ์žฅ์‚ฌ ๊ฐ€์šด๋ฐ ํ•œ๊ณ„๊ธฐ์—…์˜ ์ฐจ์ž…๊ธˆ์€ ๋Œ€๋ถ€๋ถ„ ๋Œ€๊ธฐ์—…์ด ๊ฐ–๊ณ  ์žˆ๋Š” ์…ˆโ€์ด๋ผ๋ฉฐ โ€œ1๊ฐœ ๋Œ€๊ธฐ์—…์˜ ๋ถ€์‹ค์€ 25๊ฐœ ์ค‘์†Œ๊ธฐ์—…์˜ ๋ถ€์‹ค๊ณผ ๋น„์Šทํ•  ์ •๋„๋กœ ์‹œ์žฅ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์ด ํฌ๋‹ค๋Š” ๊ฒŒ ๋ฌธ์ œโ€๋ผ๊ณ  ์šฐ๋ คํ–ˆ๋‹ค.๋ณด๊ณ ์„œ๋Š” ์œ„ํ—˜์„ ์ตœ์†Œํ™”ํ•˜๋ ค๋ฉด ์„ ์ œ์ ์ธ ๊ตฌ์กฐ์กฐ์ •์ด ํ•ด๋‹ต์ด๋ผ๋ฉฐ ๋ถ€์‹ค ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€ ๊ธฐ์—…์„ ์„ ๋ณ„ํ•ด ์ถ”๊ฐ€์ ์ธ ์ž๊ธˆ ๊ณต๊ธ‰์„ ์–ต์ œํ•ด์•ผ ๋ถ€์‹ค ํ™•์‚ฐ์„ ๋ง‰์„ ์ˆ˜ ์žˆ๋‹ค๊ณ  ์ง€์ ํ–ˆ๋‹ค.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.8517 | | **spearman_cosine** | **0.8454** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 17,552 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 8 tokens</li><li>mean: 17.68 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 229 tokens</li><li>mean: 438.65 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:----------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>2012๋…„์— ๋ถ€์ผ์žฅํ•™ํšŒ์˜ ์ฃผ์‹๋ฐ˜ํ™˜์— ๋Œ€ํ•ด ๊ธฐ๊ฐ ๊ฒฐ์ •์„ ๋‚ด๋ฆฐ ์žฌํŒ๋ถ€๋Š”?</code> | <code>์ง„์‹ค๊ทœ๋ช… ๊ฒฐ์ •์„ ๋ฐ›์€ ๊น€์ง€ํƒœ์˜ ์œ ๊ฐ€์กฑ๋“ค์€ 2010๋…„ 6์›”์—์•ผ ๋ฒ•์›์— ์ •์ˆ˜์žฅํ•™ํšŒ์™€ ๊ตญ๊ฐ€๋ฅผ ์ƒ๋Œ€๋กœ ๋‚ธ ์ฃผ์‹์–‘๋„ ๋“ฑ ์ฒญ๊ตฌ์†Œ์†ก์„ ๋ƒˆ๋‹ค. ๊น€์”จ ์ธก์€ "๋ฐ• ์ „ ๋Œ€ํ†ต๋ น์ด ์‚ฌ๋งํ•˜๊ณ  ๋‚œ ์ดํ›„ 1980๋…„์— ํ† ์ง€ ๋ฐ˜ํ™˜์ฒญ๊ตฌ ์˜์‚ฌ๋ฅผ ํ‘œ์‹œํ–ˆ๊ณ , ๊ณผ๊ฑฐ์‚ฌ์ •๋ฆฌ์œ„์›ํšŒ์˜ ์ง„์‹ค๊ทœ๋ช… ๊ฒฐ์ •์„ ์†ก๋‹ฌ๋ฐ›์€ ์ดํ›„ ์†ํ•ด๋ฐฐ์ƒ์„ ์ฒญ๊ตฌํ•œ ๊ฒƒ์ด๋ฏ€๋กœ ๊ณต์†Œ์‹œํšจ๊ฐ€ ๋‚จ์•„์žˆ๋‹ค"๊ณ  ์ฃผ์žฅํ–ˆ๋‹ค.<br><br>ํ•˜์ง€๋งŒ 1์‹ฌ ์žฌํŒ๋ถ€๋Š” "์†Œ๋ฉธ์‹œํšจ๊ฐ€ ์ง€๋‚ฌ๋‹ค"๋ฉฐ ๊น€์”จ ์ธก์˜ ์ฒญ๊ตฌ๋ฅผ ๊ธฐ๊ฐํ–ˆ๊ณ , 2์‹ฌ ์žฌํŒ๋ถ€๋„ ๊น€์”จ๊ฐ€ ๊ตญ๊ฐ€์˜ ๊ฐ•๋ฐ•ํ–‰์œ„๋กœ ์ธํ•ด ์žฌ์‚ฐ์„ ํ—Œ๋‚ฉํ•œ ๊ฒƒ์€ ์ธ์ •ํ•˜๋ฉด์„œ๋„ ์˜์‚ฌ๊ฒฐ์ •๊ถŒ์ด ์™„์ „ํžˆ ๋ฐ•ํƒˆ๋‹นํ•œ ์ƒํƒœ๋Š” ์•„๋‹ˆ์—ˆ๋˜ ๊ฒƒ์œผ๋กœ ํŒ๋‹จํ•ด ์›๊ณ  ํŒจ์†Œ ํŒ๊ฒฐํ–ˆ๋‹ค. 
2012๋…„ 2์›” 24์ผ ์„œ์šธ์ค‘์•™์ง€๋ฒ• ๋ฏผ์‚ฌํ•ฉ์˜17๋ถ€(์žฌํŒ์žฅ ์—ผ์›์„ญ)์— ์˜ํ•ด 5.16์žฅํ•™ํšŒ์˜ โ€˜ํ—Œ๋‚ฉโ€™ ๊ณผ์ •์—์„œ ๊ฐ•์••์ด ์žˆ์—ˆ์Œ์ด ๋‹ค์‹œ ํ•œ ๋ฒˆ ์ž…์ฆ๋˜์—ˆ๋‹ค. ํ•˜์ง€๋งŒ ์žฌํŒ๋ถ€๋Š” ๊น€์˜์šฐ๊ฐ€ ์ œ๊ธฐํ•œ ๊ณผ๊ฑฐ ๋ถ€์ผ์žฅํ•™ํšŒ์˜ ์ฃผ์‹๋ฐ˜ํ™˜์— ๋Œ€ํ•ด์„œ๋Š” ๊ณต์†Œ์‹œํšจ ์†Œ๋ฉธ์„ ์ด์œ ๋กœ ๊ธฐ๊ฐํ•˜์˜€๋‹ค. ์ด์— ๊ตญ๊ฐ€์˜ ๋ฒ”์ฃ„์— ๋Œ€ํ•ด์„œ๋Š” ๊ณต์†Œ ์‹œํšจ์˜ ๋ฒ”์œ„๋ฅผ ํญ๋„“๊ฒŒ ์ธ์ •ํ•ด์ค˜์•ผ ํ•œ๋‹ค๋Š” ๋น„ํŒ๋„ ์ œ๊ธฐ๋˜์—ˆ๋‹ค. <br><br>๋Œ€๋ฒ•์›์€ 2014๋…„ 2์›” 13์ผ ๊น€์ง€ํƒœ์”จ ์žฅ๋‚จ ์˜๊ตฌ ์”จ๋ฅผ ๋น„๋กฏํ•œ ์œ ๊ฐ€์กฑ 6๋ช…์ด ์ •์ˆ˜์žฅํ•™ํšŒ์™€ ๊ตญ๊ฐ€๋ฅผ ์ƒ๋Œ€๋กœ ๋‚ธ ์ฃผ์‹์–‘๋„ ๋“ฑ ์ฒญ๊ตฌ์†Œ์†ก ์ƒ๊ณ ์‹ฌ์—์„œ ์‹ฌ๋ฆฌ๋ถˆ์†ํ–‰ ๊ธฐ๊ฐ ๊ฒฐ์ •์„ ๋‚ด๋ ธ๋‹ค. '์‹ฌ๋ฆฌ๋ถˆ์†ํ–‰'์€ ์ƒ๊ณ  ์‚ฌ๊ฑด ๊ฐ€์šด๋ฐ ์ƒ๊ณ  ๋Œ€์ƒ์ด ์•„๋‹ˆ๋ผ๊ณ  ํŒ๋‹จ๋˜๋Š” ์‚ฌ๊ฑด์€ ๋”์ด์ƒ ์‹ฌ๋ฆฌํ•˜์ง€ ์•Š๊ณ  ๊ธฐ๊ฐํ•˜๋Š” ์ œ๋„๋‹ค.</code> | | <code>ํˆฌ์ž์˜ ๊ท€์žฌ'๋ผ ๋ถˆ๋ฆฌ๋Š” ์‚ฌ๋žŒ์ด ์˜ฌํ•ด ๋ฒˆ ๋ˆ์€ ์–ผ๋งˆ์ธ๊ฐ€?</code> | <code>์˜ฌํ•ด ์ „ ์„ธ๊ณ„์—์„œ ๋ˆ„๊ฐ€ ๊ฐ€์žฅ ๋งŽ์€ ๋ˆ์„ ๋ฒŒ์—ˆ์„๊นŒ.๋ฏธ๊ตญ ๊ฒฝ์ œ๋งค์ฒด ๋งˆ์ผ“์›Œ์น˜๋Š” โ€˜ํˆฌ์ž์˜ ๊ท€์žฌโ€™ ์›Œ๋Ÿฐ ๋ฒ„ํ• ๋ฒ…์…”ํ•ด์„œ์›จ์ด ํšŒ์žฅ์ด ์˜ฌํ•ด ์„ธ๊ณ„์—์„œ ๊ฐ€์žฅ ๋งŽ์€ ๋ˆ์„ ๋ฒŒ์—ˆ๋‹ค๊ณ  18์ผ(ํ˜„์ง€์‹œ๊ฐ„) ๋ณด๋„ํ–ˆ๋‹ค. ์Šค์œ„์Šค ์ž์‚ฐ์ •๋ณด์—…์ฒด ์›ฐ์Šค์—‘์Šค(Wealth-X)์™€ UBS ์€ํ–‰์˜ ์กฐ์‚ฌ ๊ฒฐ๊ณผ ์˜ฌ์ดˆ 464์–ต๋‹ฌ๋Ÿฌ์˜€๋˜ ๋ฒ„ํ•์˜ ์ž์‚ฐ์ด 127์–ต๋‹ฌ๋Ÿฌ(์•ฝ 13์กฐ4500์–ต์›) ๋Š˜์–ด ์ง€๋‚œ 11์ผ ๊ธฐ์ค€ 591์–ต๋‹ฌ๋Ÿฌ๊ฐ€ ๋๋‹ค. ํ•˜๋ฃจ์— 3700๋งŒ๋‹ฌ๋Ÿฌ(์•ฝ 392์–ต์›)๋ฅผ ๋ฒŒ์–ด๋“ค์ธ ๊ฒƒ์ด๋‹ค. ๋นŒ ๊ฒŒ์ด์ธ  ๋งˆ์ดํฌ๋กœ์†Œํ”„ํŠธ ํšŒ์žฅ์€ 726์–ต๋‹ฌ๋Ÿฌ์˜ ์ž์‚ฐ์„ ๋ณด์œ ํ•ด 1์œ„ ๋ถ€์ž ์ž๋ฆฌ๋ฅผ ์ง€์ผฐ์ง€๋งŒ, ์˜ฌํ•ด ๋ฒ„ํ•๋ณด๋‹ค ์ ์€ 115์–ต๋‹ฌ๋Ÿฌ๋ฅผ ๋ฒŒ์–ด โ€˜์˜ฌํ•ด ๋ˆ ๋งŽ์ด ๋ฒˆ ์‚ฌ๋žŒ ์ˆœ์œ„โ€™์—์„œ๋Š” 2์œ„์— ๋จธ๋ฌผ๋ €๋‹ค.3์œ„๋Š” ์ž์‚ฐ์ด 114์–ต๋‹ฌ๋Ÿฌ ์ฆ๊ฐ€ํ•œ ์นด์ง€๋…ธ ์—…๊ณ„์˜ ๊ฑฐ๋ฌผ ์…ธ๋˜ ์• ๋ธ์Šจ ๋ผ์Šค๋ฒ ์ด๊ฑฐ์Šค์ƒŒ์ฆˆ ํšŒ์žฅ์ด ์ฐจ์ง€ํ–ˆ๋‹ค. ์• ๋ธ์Šจ ํšŒ์žฅ์€ ์ง€๋‚œ 2์›” ๋ฐฉํ•œํ•ด โ€œํ•œ๊ตญ์— ๋‚ด๊ตญ์ธ ์ถœ์ž…์ด ๊ฐ€๋Šฅํ•œ โ€˜์˜คํ”ˆ ์นด์ง€๋…ธโ€™ ์„ค๋ฆฝ ํ—ˆ๊ฐ€๊ฐ€ ๋‚˜๋ฉด 40์–ต~60์–ต๋‹ฌ๋Ÿฌ(์•ฝ 4์กฐ3000์–ต~6์กฐ5000์–ต์›)๋ฅผ ํˆฌ์žํ•  ์˜ํ–ฅ์ด ์žˆ๋‹คโ€๊ณ  ๋ฐํžŒ ๋ฐ” ์žˆ๋‹ค.113์–ต๋‹ฌ๋Ÿฌ๋ฅผ ๋ฒˆ ์ œํ”„ ๋ฒ ์ €์Šค ์•„๋งˆ์กด ์ตœ๊ณ ๊ฒฝ์˜์ž(CEO)์™€ 105์–ต๋‹ฌ๋Ÿฌ๋ฅผ ๋ฒˆ ๋งˆํฌ ์ €์ปค๋ฒ„๊ทธ ํŽ˜์ด์Šค๋ถ CEO๊ฐ€ ๊ฐ๊ฐ 4์œ„์™€ 5์œ„์— ์˜ฌ๋ž๋‹ค. ํŠนํžˆ ์ €์ปค๋ฒ„๊ทธ๋Š” ์˜ฌํ•ด ๋ชจ๋ฐ”์ผ ๊ด‘๊ณ  ๋งค์ถœ ์ฆ๊ฐ€๋กœ ํŽ˜์ด์Šค๋ถ ์ฃผ๊ฐ€๊ฐ€ ๊ธ‰๋“ฑํ•˜์ž ์ž์‚ฐ๊ฐ€์น˜๊ฐ€ ํฌ๊ฒŒ ๋Š˜์–ด๋‚œ ๊ฒฝ์šฐ๋‹ค.6์œ„๋Š” 103์–ต๋‹ฌ๋Ÿฌ๋ฅผ ๋ฒˆ ์†์ •์˜ ์ผ๋ณธ ์†Œํ”„ํŠธ๋ฑ…ํฌ ํšŒ์žฅ์ด์—ˆ์œผ๋ฉฐ, ๊ตฌ๊ธ€ ๊ณต๋™ ์ฐฝ์—…์ž์ธ ์„ธ๋ฅด๊ฒŒ์ด ๋ธŒ๋ฆฐ(93์–ต๋‹ฌ๋Ÿฌ)๊ณผ ๋ž˜๋ฆฌ ํŽ˜์ด์ง€(93์–ต๋‹ฌ๋Ÿฌ)๋Š” ๋‚˜๋ž€ํžˆ 7์œ„์™€ 8์œ„๋ฅผ ๊ธฐ๋กํ–ˆ๋‹ค. 9์œ„๋Š” ๋คผ์ฆˆํ—ˆ ๊ฐค๋Ÿญ์‹œ ์—”ํ„ฐํ…Œ์ธ๋จผํŠธ ํšŒ์žฅ(83์–ต๋‹ฌ๋Ÿฌ)์ด, 10์œ„๋Š” ํ–‰๋™์ฃผ์˜ ํˆฌ์ž์ž ์นผ ์•„์ด์นธ(72์–ต๋‹ฌ๋Ÿฌ)์ด ์ฐจ์ง€ํ–ˆ๋‹ค.์›ฐ์Šค์—‘์Šค๋Š” โ€œํ˜„์žฌ ์ „ ์„ธ๊ณ„์—๋Š” 2170๋ช…์˜ ์–ต๋งŒ์žฅ์ž๊ฐ€ ์žˆ๋‹คโ€๋ฉฐ โ€œ์ด๋“ค์˜ ์ž์‚ฐ์€ ๋ฏธ๊ตญ๋ฐœ ๊ธˆ์œต์œ„๊ธฐ ์งํ›„์ธ 2009๋…„ 3์กฐ1000์–ต๋‹ฌ๋Ÿฌ์—์„œ ์˜ฌํ•ด 6์กฐ5000์–ต๋‹ฌ๋Ÿฌ๋กœ ๋Š˜์—ˆ๋‹คโ€๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.</code> | | <code>DDP๋ฅผ ์„ค๊ณ„ํ•œ ๊ฑด์ถ•๊ฐ€์˜ ์ถœ์‹  ๊ตญ๊ฐ€๋Š”?</code> | <code>์˜› ์„œ์šธ ๋™๋Œ€๋ฌธ์šด๋™์žฅ ๋ถ€์ง€์— ๋“ค์–ด์„  โ€˜๋™๋Œ€๋ฌธ๋””์ž์ธํ”Œ๋ผ์ž(DDP)โ€™๊ฐ€ ๋‚ด๋‹ฌ 21์ผ ๊ฐœ์žฅ์„ ์•ž๋‘๊ณ  ํŒŒ๊ฒฉ์  ์œ„์šฉ์„ ๋“œ๋Ÿฌ๋ƒˆ๋‹ค. ์„ค๊ณ„ ๋‹น์‹œ๋ถ€ํ„ฐ ๋œจ๊ฑฐ์šด ์ฐฌ๋ฐ˜ ๋…ผ๋ž€๊ณผ ํ•จ๊ป˜ ํ™”์ œ๋ฅผ ๋ชจ์•˜๊ธฐ ๋•Œ๋ฌธ์— ์ค€๊ณต ์ดํ›„ ์„œ์šธ์˜ โ€˜๊ธ€๋กœ๋ฒŒ ๋ช…๋ฌผ ๊ฑด์ถ•โ€™์œผ๋กœ ๋ถ€์ƒํ•  ์ˆ˜ ์žˆ์„์ง€ ๊ด€์‹ฌ์ด ์ ๋ฆฌ๊ณ  ์žˆ๋‹ค. 
์˜๊ตญ์˜ ์„ธ๊ณ„์  ๊ฑด์ถ•๊ฐ€์ธ ์žํ•˜ ํ•˜๋””๋“œ(์ด๋ผํฌ ์ถœ์‹  ์—ฌ์„ฑ๊ฑด์ถ•๊ฐ€)๊ฐ€ ๊ตญ์ œํ˜„์ƒ๊ณต๋ชจ๋ฅผ ํ†ตํ•ด ๊ฑด์ถ•์„ค๊ณ„๋ฅผ ๋งก์•˜๋‹ค. ๋ฏธํ™•์ธ ๋น„ํ–‰๋ฌผ์ฒด(UFO)๊ฐ€ ์—ฐ์ƒ๋  ์ •๋„๋กœ ์ด์ƒ‰์ ์ธ โ€˜๋น„์ •ํ˜• ๊ฑด๋ฌผ(ํ˜•ํƒœ๊ฐ€ ์ผ์ •์น˜ ์•Š์€ ๊ฑด๋ฌผ)โ€™์ด์–ด์„œ ๊ฑด์ถ•๊ณ„์— ํฐ ํŒŒ์žฅ์„ ์ผ์œผ์ผฐ๋‹ค. ๋™๋Œ€๋ฌธ ์ผ๋Œ€์˜ ์—ญ์‚ฌ์„ฑ๊ณผ ์ง€์—ญ์„ฑ์ด ๋ฌด์‹œ๋œ ๋…๋ถˆ์žฅ๊ตฐํ˜• ๋””์ž์ธ์ด๋ž€ ํ˜นํ‰๊ณผ ๋ฏธ๋ž˜ ๋™๋Œ€๋ฌธ์˜ ๋ฐœ์ „์ƒ์ด ํ•จ์ถ•๋œ ์ฐฝ์กฐ์„ฑ์ด ๋‹๋ณด์ธ๋‹ค๋Š” ํ˜ธํ‰์ด ์—‡๊ฐˆ๋ฆฌ๋ฉด์„œ ํ•œ๋™์•ˆ ๋…ผ์Ÿ์ด ๋œจ๊ฑฐ์› ๋‹ค. ๊ฑด๋ฌผ์˜ ๋น„์ •ํ˜•์„ฑ์ด ์›Œ๋‚™ ๊ฐ•ํ•ด ์‹œ๊ณต์‚ฌ์ธ ์‚ผ์„ฑ๋ฌผ์‚ฐ๋„ ๊ณต์‚ฌ์— ์–ด๋ ค์›€์ด ๋งŽ์•˜๋‹ค. ์‹œ๊ณต๊ณผ์ •์—์„œ ์ฒจ๋‹จ๊ธฐ์ˆ  ์ ์šฉ์€ ๋ฌผ๋ก  ์ ์ž–์€ ์ง„๊ธฐ๋ก๋„ ์Ÿ์•„์กŒ๋‹ค. ๊ฐ™์€ ํฌ๊ธฐ์˜ ์ผ๋ฐ˜ ๊ฑด๋ฌผ(์ •ํ˜• ๊ฑด๋ฌผ)์— ๋น„ํ•ด ๊ณต์‚ฌ๊ธฐ๊ฐ„๋„ ๊ฑฐ์˜ 2๋ฐฐ ์ด์ƒ(4๋…„8๊ฐœ์›”) ๊ฑธ๋ ธ๋‹ค. ๊ฑด๋ฌผ ์™ธ์žฅ์„ ๊ฐ์‹ธ๊ณ  ์žˆ๋Š” ์•Œ๋ฃจ๋ฏธ๋Š„ ํŒจ๋„(๊ฐ€๋กœ, ์„ธ๋กœ 1.5๏ฝ)๋งŒ๋„ 4๋งŒ5133์žฅ์ด ์“ฐ์˜€๋‹ค. ํŒจ๋„์ด ๋ชจ๋‘ ์ œ๊ฐ๊ฐ์ด์–ด์„œ ๊ณต์žฅ ์ƒ์‚ฐ์ด ์•„๋‹Œ ๋ณ„๋„ ์ œ์ž‘์œผ๋กœ ๋งž์ถฐ ๋ถ™์˜€๋‹ค. ๊ฑด๋ฌผ ์™ธ๊ด€ ๋ฉด์ ์ด ์ถ•๊ตฌ์žฅ 3๋ฐฐ ํฌ๊ธฐ์— ๋‹ฌํ–ˆ๋‹ค. ์‚ผ์„ฑ๋ฌผ์‚ฐ์€ ๊ตญ๋‚ด ๊ณต๊ณต๊ณต์‚ฌ ์ตœ์ดˆ๋กœ 3์ฐจ์› ์ž…์ฒด์„ค๊ณ„ ๋ฐฉ์‹์ธ BIM์„ ํ™œ์šฉํ•ด ์ด๋“ค ํŒจ๋„์„ ์ œ์ž‘ํ–ˆ๋‹ค. ๋น„์ •ํ˜• ์™ธ๊ด€์˜ ๋…ธ์ถœ ์ฝ˜ํฌ๋ฆฌํŠธ ์ž‘์—…๋„ ์ดˆ๊ณ ์ธต ๋นŒ๋”ฉ์„ ๋Šฅ๊ฐ€ํ•˜๋Š” ๋‚œ๊ณต์‚ฌ์˜€๋‹ค. ์ด์ง„๋ฐฐ ์‚ผ์„ฑ๋ฌผ์‚ฐ PM(ํ”„๋กœ์ ํŠธ ๋งค๋‹ˆ์ง€๋จผํŠธ) ์ƒ๋ฌด๋Š” โ€œBIM ๋ชจ๋ธ์„ ํ†ตํ•ด ์ƒˆ๋กœ์šด ๊ฑฐํ‘ธ์ง‘ ๊ณต๋ฒ•์„ ๊ฐœ๋ฐœํ•ด ์ ์šฉํ–ˆ๊ณ , ๊ฐ๊ธฐ ๋‹ค๋ฅธ ๊ณก์„ ๊ณผ ํ˜•ํƒœ๋กœ ์„ค๊ณ„๋œ ์‹ค๋‚ด ๊ณต์‚ฌ์—์„œ๋Š” ์‹ค๋ฌผ ํฌ๊ธฐ ๋ชจํ˜•์„ ์ˆ˜์ฐจ๋ก€ ์ œ์ž‘ํ•ด ์„ค๊ณ„ ์›์•ˆ์˜ ๋А๋‚Œ์„ ์ตœ๋Œ€ํ•œ ์‚ด๋ ธ๋‹คโ€๊ณ  ๋งํ–ˆ๋‹ค.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - 
`dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | spearman_cosine | |:------:|:----:|:-------------:|:---------------:| | -1 | -1 | - | 0.8454 | | 0.4558 | 500 | 0.161 | - | | 0.9116 | 1000 | 0.1096 | - | ### Framework Versions - Python: 3.11.11 - Sentence Transformers: 3.4.1 - Transformers: 4.48.3 - PyTorch: 2.5.1+cu124 - Accelerate: 1.3.0 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work 
that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
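For reference, the `MultipleNegativesRankingLoss` configuration reported above can be reproduced in a few lines. This is a minimal sketch, not taken from the card: the base checkpoint is a placeholder, and only the `scale=20.0` / `cos_sim` settings come from the card.

```python
from sentence_transformers import SentenceTransformer, losses, util

# Hypothetical base checkpoint; the card does not name the model it describes.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# The loss as configured above: scale=20.0 with cosine similarity.
# Every other in-batch positive serves as a negative for a given anchor,
# which is why the card pairs this loss with the no_duplicates batch sampler.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```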
mlx-community/ShowUI-2B-bf16-8bit
mlx-community
2025-02-26T00:10:58Z
0
0
mlx
[ "mlx", "safetensors", "qwen2_vl", "GUI agents", "vision-language-action model", "computer use", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:mit", "region:us" ]
null
2025-02-26T00:10:45Z
--- tags: - GUI agents - vision-language-action model - computer use - mlx base_model: - Qwen/Qwen2-VL-2B-Instruct license: mit --- # mlx-community/ShowUI-2B-bf16-8bit This model was converted to MLX format from [`prince-canuma/ShowUI-2B-bf16`](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) using mlx-vlm version **0.1.14**. Refer to the [original model card](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model mlx-community/ShowUI-2B-bf16-8bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image> ```
mlx-community/ShowUI-2B-bf16-4bit
mlx-community
2025-02-26T00:08:12Z
0
0
mlx
[ "mlx", "safetensors", "qwen2_vl", "GUI agents", "vision-language-action model", "computer use", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:mit", "region:us" ]
null
2025-02-26T00:08:01Z
--- tags: - GUI agents - vision-language-action model - computer use - mlx base_model: - Qwen/Qwen2-VL-2B-Instruct license: mit --- # mlx-community/ShowUI-2B-bf16-4bit This model was converted to MLX format from [`prince-canuma/ShowUI-2B-bf16`](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) using mlx-vlm version **0.1.14**. Refer to the [original model card](https://huggingface.co/prince-canuma/ShowUI-2B-bf16) for more details on the model. ## Use with mlx ```bash pip install -U mlx-vlm ``` ```bash python -m mlx_vlm.generate --model mlx-community/ShowUI-2B-bf16-4bit --max-tokens 100 --temp 0.0 --prompt "Describe this image." --image <path_to_image> ```
straykittycat/b1
straykittycat
2025-02-26T00:07:49Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-26T00:04:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheRamsay/wav2vec2-gpt2-enc-dec
TheRamsay
2025-02-26T00:07:49Z
164
0
transformers
[ "transformers", "safetensors", "speech-encoder-decoder", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_17_0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-28T13:38:28Z
--- library_name: transformers tags: - generated_from_trainer datasets: - common_voice_17_0 metrics: - wer model-index: - name: wav2vec2-gpt2-enc-dec results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_17_0 type: common_voice_17_0 config: cs split: train[:500] args: cs metrics: - name: Wer type: wer value: 0.8489326765188834 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-gpt2-enc-dec This model is a fine-tuned version of [](https://huggingface.co/) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3276 - Wer: 0.8489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.08 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 1.9498 | 1.5625 | 50 | 0.6548 | 0.9324 | | 0.4531 | 3.125 | 100 | 0.3959 | 0.9020 | | 0.4087 | 4.6875 | 150 | 0.3735 | 0.8894 | | 0.3992 | 6.25 | 200 | 0.3572 | 0.8747 | | 0.3725 | 7.8125 | 250 | 0.3500 | 0.8763 | | 0.3635 | 9.375 | 300 | 0.3419 | 0.8626 | | 0.3647 | 10.9375 | 350 | 0.3381 | 0.8632 | | 0.36 | 12.5 | 400 | 0.3340 | 0.8566 | | 0.3588 | 14.0625 | 450 | 0.3316 | 0.8547 | | 0.362 | 15.625 | 500 | 0.3299 | 0.8547 | | 0.3613 | 17.1875 | 550 | 0.3280 | 0.8498 | | 0.3505 | 18.75 | 600 | 0.3276 | 0.8489 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.6.0+cu124 - Datasets 3.2.0 - Tokenizers 0.21.0
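The card gives no usage snippet; below is a minimal inference sketch, an assumption rather than documented usage. The checkpoint is a `speech-encoder-decoder` model, which the ASR pipeline can load directly, and `sample.wav` stands in for a Czech recording matching the Common Voice `cs` split used for evaluation.

```python
from transformers import pipeline

# Hypothetical quick start; replace "sample.wav" with a real audio file.
asr = pipeline("automatic-speech-recognition", model="TheRamsay/wav2vec2-gpt2-enc-dec")
print(asr("sample.wav")["text"])
```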
ailoveydovey/lra_mnhrd
ailoveydovey
2025-02-26T00:07:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2025-02-26T00:06:44Z
--- license: creativeml-openrail-m ---
prince-canuma/ShowUI-2B-bf16
prince-canuma
2025-02-26T00:06:42Z
0
0
null
[ "safetensors", "qwen2_vl", "GUI agents", "vision-language-action model", "computer use", "arxiv:2411.17465", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:mit", "region:us" ]
null
2025-02-25T21:45:20Z
--- tags: - GUI agents - vision-language-action model - computer use base_model: - Qwen/Qwen2-VL-2B-Instruct license: mit --- [Github](https://github.com/showlab/ShowUI/tree/main) | [arXiv](https://arxiv.org/abs/2411.17465) | [HF Paper](https://huggingface.co/papers/2411.17465) | [Spaces](https://huggingface.co/spaces/showlab/ShowUI) | [Datasets](https://huggingface.co/datasets/showlab/ShowUI-desktop-8K) | [Quick Start](https://huggingface.co/showlab/ShowUI-2B) <img src="examples/showui.jpg" alt="ShowUI" width="640"> ShowUI is a lightweight (2B) vision-language-action model designed for GUI agents. ## ๐Ÿค— Try our HF Space Demo https://huggingface.co/spaces/showlab/ShowUI ## โญ Quick Start 1. Load model ```python import ast import torch from PIL import Image, ImageDraw from qwen_vl_utils import process_vision_info from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor def draw_point(image_input, point=None, radius=5): if isinstance(image_input, str): image = Image.open(BytesIO(requests.get(image_input).content)) if image_input.startswith('http') else Image.open(image_input) else: image = image_input if point: x, y = point[0] * image.width, point[1] * image.height ImageDraw.Draw(image).ellipse((x - radius, y - radius, x + radius, y + radius), fill='red') display(image) return model = Qwen2VLForConditionalGeneration.from_pretrained( "showlab/ShowUI-2B", torch_dtype=torch.bfloat16, device_map="auto" ) min_pixels = 256*28*28 max_pixels = 1344*28*28 processor = AutoProcessor.from_pretrained("showlab/ShowUI-2B", min_pixels=min_pixels, max_pixels=max_pixels) ``` 2. **UI Grounding** ```python img_url = 'examples/web_dbd7514b-9ca3-40cd-b09a-990f7b955da1.png' query = "Nahant" _SYSTEM = "Based on the screenshot of the page, I give a text description and you give its corresponding location. The coordinate represents a clickable location [x, y] for an element, which is a relative coordinate on the screenshot, scaled from 0 to 1." messages = [ { "role": "user", "content": [ {"type": "text", "text": _SYSTEM}, {"type": "image", "image": img_url, "min_pixels": min_pixels, "max_pixels": max_pixels}, {"type": "text", "text": query} ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False )[0] click_xy = ast.literal_eval(output_text) # [0.73, 0.21] draw_point(img_url, click_xy, 10) ``` This will visualize the grounding results like (where the red points are [x,y]) ![download](https://github.com/user-attachments/assets/8fe2783d-05b6-44e6-a26c-8718d02b56cb) 3. **UI Navigation** - Set up system prompt. ```python _NAV_SYSTEM = """You are an assistant trained to navigate the {_APP} screen. Given a task instruction, a screen observation, and an action history sequence, output the next action and wait for the next observation. 
Here is the action space: {_ACTION_SPACE} """ _NAV_FORMAT = """ Format the action as a dictionary with the following keys: {'action': 'ACTION_TYPE', 'value': 'element', 'position': [x,y]} If value or position is not applicable, set it as `None`. Position might be [[x1,y1], [x2,y2]] if the action requires a start and end position. Position represents the relative coordinates on the screenshot and should be scaled to a range of 0-1. """ action_map = { 'web': """ 1. `CLICK`: Click on an element, value is not applicable and the position [x,y] is required. 2. `INPUT`: Type a string into an element, value is a string to type and the position [x,y] is required. 3. `SELECT`: Select a value for an element, value is not applicable and the position [x,y] is required. 4. `HOVER`: Hover on an element, value is not applicable and the position [x,y] is required. 5. `ANSWER`: Answer the question, value is the answer and the position is not applicable. 6. `ENTER`: Enter operation, value and position are not applicable. 7. `SCROLL`: Scroll the screen, value is the direction to scroll and the position is not applicable. 8. `SELECT_TEXT`: Select some text content, value is not applicable and position [[x1,y1], [x2,y2]] is the start and end position of the select operation. 9. `COPY`: Copy the text, value is the text to copy and the position is not applicable. """, 'phone': """ 1. `INPUT`: Type a string into an element, value is not applicable and the position [x,y] is required. 2. `SWIPE`: Swipe the screen, value is not applicable and the position [[x1,y1], [x2,y2]] is the start and end position of the swipe operation. 3. `TAP`: Tap on an element, value is not applicable and the position [x,y] is required. 4. `ANSWER`: Answer the question, value is the status (e.g., 'task complete') and the position is not applicable. 5. `ENTER`: Enter operation, value and position are not applicable. """ } ``` ```python img_url = 'examples/chrome.png' split='web' system_prompt = _NAV_SYSTEM.format(_APP=split, _ACTION_SPACE=action_map[split]) + _NAV_FORMAT query = "Search the weather for the New York city." messages = [ { "role": "user", "content": [ {"type": "text", "text": system_prompt}, {"type": "text", "text": f'Task: {query}'}, # {"type": "text", "text": PAST_ACTION}, {"type": "image", "image": img_url, "min_pixels": min_pixels, "max_pixels": max_pixels}, ], } ] text = processor.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) image_inputs, video_inputs = process_vision_info(messages) inputs = processor( text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt", ) inputs = inputs.to("cuda") generated_ids = model.generate(**inputs, max_new_tokens=128) generated_ids_trimmed = [ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) ] output_text = processor.batch_decode( generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False )[0] print(output_text) # {'action': 'CLICK', 'value': None, 'position': [0.49, 0.42]}, # {'action': 'INPUT', 'value': 'weather for New York city', 'position': [0.49, 0.42]}, # {'action': 'ENTER', 'value': None, 'position': None} ``` ![download](https://github.com/user-attachments/assets/624097ea-06f2-4c8f-83f6-b6b9ee439c0c) If you find our work helpful, please consider citing our paper. 
``` @misc{lin2024showui, title={ShowUI: One Vision-Language-Action Model for GUI Visual Agent}, author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Zhengyuan Yang and Shiwei Wu and Zechen Bai and Weixian Lei and Lijuan Wang and Mike Zheng Shou}, year={2024}, eprint={2411.17465}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2411.17465}, } ```
ginogrossi/gemma-2-2B-it-thinking-function_calling-V0
ginogrossi
2025-02-26T00:06:31Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-2-2b-it", "base_model:finetune:google/gemma-2-2b-it", "endpoints_compatible", "region:us" ]
null
2025-02-26T00:01:32Z
--- base_model: google/gemma-2-2b-it library_name: transformers model_name: gemma-2-2B-it-thinking-function_calling-V0 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-2-2B-it-thinking-function_calling-V0 This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ginogrossi/gemma-2-2B-it-thinking-function_calling-V0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.48.3 - Pytorch: 2.5.1+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
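The card notes the model was trained with SFT via TRL but shows no training code. Below is a minimal sketch under stated assumptions: the dataset name and output directory are placeholders, not from the card; only the base model and the use of TRL's `SFTTrainer` follow from the card.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the card does not name its function-calling training data.
dataset = load_dataset("your/function-calling-dataset", split="train")

trainer = SFTTrainer(
    model="google/gemma-2-2b-it",  # the base model named in the card
    train_dataset=dataset,
    args=SFTConfig(output_dir="gemma-2-2B-it-thinking-function_calling-V0"),
)
trainer.train()
```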
phonemetransformers/childes-segmentation-18M-gpt2_lm-model
phonemetransformers
2025-02-26T00:05:59Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "English", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T20:57:43Z
--- library_name: transformers tags: - English - generated_from_trainer model-index: - name: childes-segmentation-18M-gpt2_lm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # childes-segmentation-18M-gpt2_lm-model This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5598 - Model Preparation Time: 0.0013 - Perplexity: 4.7580 - Bpc: 2.2503 - Spike Seg Type Fscore Entropy: 0.5424 - Spike Seg Boundary Fscore Entropy: 0.7652 - Absolute Seg Type Fscore Entropy: 0.4188 - Absolute Seg Boundary Fscore Entropy: 0.6411 - Spike Seg Type Fscore Increase in entropy: 0.5339 - Spike Seg Boundary Fscore Increase in entropy: 0.7796 - Absolute Seg Type Fscore Increase in entropy: 0.5744 - Absolute Seg Boundary Fscore Increase in entropy: 0.7708 - Spike Seg Type Fscore Loss: 0.4461 - Spike Seg Boundary Fscore Loss: 0.6948 - Absolute Seg Type Fscore Loss: 0.3397 - Absolute Seg Boundary Fscore Loss: 0.6138 - Spike Seg Type Fscore Increase in loss: 0.5024 - Spike Seg Boundary Fscore Increase in loss: 0.7430 - Absolute Seg Type Fscore Increase in loss: 0.5046 - Absolute Seg Boundary Fscore Increase in loss: 0.7437 - Spike Seg Type Fscore Rank: 0.4778 - Spike Seg Boundary Fscore Rank: 0.6585 - Absolute Seg Type Fscore Rank: 0.3314 - Absolute Seg Boundary Fscore Rank: 0.5551 - Spike Seg Type Fscore Increase in rank: 0.4977 - Spike Seg Boundary Fscore Increase in rank: 0.6963 - Absolute Seg Type Fscore Increase in rank: 0.4902 - Absolute Seg Boundary Fscore Increase in rank: 0.7065 - Spike Seg Type Fscore Boundary prediction: 0.5365 - Spike Seg Boundary Fscore Boundary prediction: 0.8041 - Absolute Seg Type Fscore Boundary prediction: 0.3187 - Absolute Seg Boundary Fscore Boundary prediction: 0.7456 - Spike Seg Type Fscore Increase in boundary prediction: 0.5171 - Spike Seg Boundary Fscore Increase in boundary prediction: 0.7895 - Absolute Seg Type Fscore Increase in boundary prediction: 0.2577 - Absolute Seg Boundary Fscore Increase in boundary prediction: 0.5526 - Spike Seg Type Fscore Majority vote cutoff: 0.6165 - Spike Seg Type Fscore Majority vote spike: 0.4770 - Absolute Seg Type Fscore Majority vote cutoff: 0.5211 - Absolute Seg Type Fscore Majority vote spike: 0.6022 - Spike Seg Boundary Fscore Majority vote cutoff: 0.8101 - Spike Seg Boundary Fscore Majority vote spike: 0.7717 - Absolute Seg Boundary Fscore Majority vote cutoff: 0.7609 - Absolute Seg Boundary Fscore Majority vote spike: 0.8128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 60000 - training_steps: 200000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Perplexity | Bpc | Spike Seg Type Fscore Entropy | Spike Seg Boundary Fscore Entropy | Absolute Seg Type Fscore Entropy | Absolute Seg Boundary Fscore Entropy | Spike Seg Type Fscore Increase in entropy | Spike Seg Boundary Fscore Increase in entropy | Absolute Seg Type Fscore 
Increase in entropy | Absolute Seg Boundary Fscore Increase in entropy | Spike Seg Type Fscore Loss | Spike Seg Boundary Fscore Loss | Absolute Seg Type Fscore Loss | Absolute Seg Boundary Fscore Loss | Spike Seg Type Fscore Increase in loss | Spike Seg Boundary Fscore Increase in loss | Absolute Seg Type Fscore Increase in loss | Absolute Seg Boundary Fscore Increase in loss | Spike Seg Type Fscore Rank | Spike Seg Boundary Fscore Rank | Absolute Seg Type Fscore Rank | Absolute Seg Boundary Fscore Rank | Spike Seg Type Fscore Increase in rank | Spike Seg Boundary Fscore Increase in rank | Absolute Seg Type Fscore Increase in rank | Absolute Seg Boundary Fscore Increase in rank | Spike Seg Type Fscore Boundary prediction | Spike Seg Boundary Fscore Boundary prediction | Absolute Seg Type Fscore Boundary prediction | Absolute Seg Boundary Fscore Boundary prediction | Spike Seg Type Fscore Increase in boundary prediction | Spike Seg Boundary Fscore Increase in boundary prediction | Absolute Seg Type Fscore Increase in boundary prediction | Absolute Seg Boundary Fscore Increase in boundary prediction | Spike Seg Type Fscore Majority vote cutoff | Spike Seg Type Fscore Majority vote spike | Absolute Seg Type Fscore Majority vote cutoff | Absolute Seg Type Fscore Majority vote spike | Spike Seg Boundary Fscore Majority vote cutoff | Spike Seg Boundary Fscore Majority vote spike | Absolute Seg Boundary Fscore Majority vote cutoff | Absolute Seg Boundary Fscore Majority vote spike | |:-------------:|:-------:|:------:|:---------------:|:----------------------:|:----------:|:------:|:-----------------------------:|:---------------------------------:|:--------------------------------:|:------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------------------------:|:------------------------------------------------:|:--------------------------:|:------------------------------:|:-----------------------------:|:---------------------------------:|:--------------------------------------:|:------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------:|:------------------------------:|:-----------------------------:|:---------------------------------:|:--------------------------------------:|:------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------------------------:|:------------------------------------------------:|:-----------------------------------------------------:|:---------------------------------------------------------:|:--------------------------------------------------------:|:------------------------------------------------------------:|:------------------------------------------:|:-----------------------------------------:|:---------------------------------------------:|:--------------------------------------------:|:----------------------------------------------:|:---------------------------------------------:|:-------------------------------------------------:|:------------------------------------------------:| | 1.418 | 4.5290 | 20000 | 1.5456 | 0.0013 | 4.6908 | 2.2298 | 0.5202 | 0.7537 | 0.3779 | 0.6326 | 0.4886 | 0.7542 | 0.5462 | 0.7705 | 0.4673 | 0.7125 | 0.1852 | 0.6119 | 0.5 | 0.7439 | 
0.5140 | 0.7503 | 0.4580 | 0.6515 | 0.3252 | 0.5828 | 0.4965 | 0.6950 | 0.5032 | 0.6947 | 0.5137 | 0.7850 | 0.3688 | 0.5036 | 0.4720 | 0.7564 | 0.2699 | 0.7468 | 0.6117 | 0.4695 | 0.4865 | 0.5951 | 0.8190 | 0.7707 | 0.7754 | 0.8128 | | 1.3419 | 9.0580 | 40000 | 1.5062 | 0.0013 | 4.5097 | 2.1730 | 0.5334 | 0.7731 | 0.4017 | 0.6446 | 0.4934 | 0.7641 | 0.5823 | 0.7738 | 0.4661 | 0.7199 | 0.3633 | 0.6170 | 0.5182 | 0.7655 | 0.5230 | 0.7541 | 0.4670 | 0.6554 | 0.3283 | 0.5868 | 0.5086 | 0.7047 | 0.5374 | 0.7079 | 0.5384 | 0.8 | 0.2665 | 0.7782 | 0.4865 | 0.7603 | 0.2625 | 0.7599 | 0.6162 | 0.4752 | 0.5467 | 0.6404 | 0.8207 | 0.7733 | 0.8083 | 0.8297 | | 1.2911 | 13.5870 | 60000 | 1.4740 | 0.0013 | 4.3665 | 2.1265 | 0.5431 | 0.7827 | 0.4017 | 0.6226 | 0.5042 | 0.7663 | 0.5776 | 0.7816 | 0.4832 | 0.7214 | 0.2106 | 0.6109 | 0.5060 | 0.7533 | 0.5344 | 0.7594 | 0.4732 | 0.6519 | 0.3198 | 0.5685 | 0.4923 | 0.6900 | 0.4931 | 0.6954 | 0.5379 | 0.8083 | 0.3506 | 0.4930 | 0.5008 | 0.7768 | 0.2621 | 0.7390 | 0.6045 | 0.4492 | 0.4242 | 0.6183 | 0.8186 | 0.7659 | 0.7554 | 0.8234 | | 1.2397 | 18.1159 | 80000 | 1.4710 | 0.0013 | 4.3537 | 2.1222 | 0.5355 | 0.7742 | 0.4044 | 0.6203 | 0.5169 | 0.7687 | 0.5692 | 0.7722 | 0.4724 | 0.7140 | 0.3523 | 0.6225 | 0.5088 | 0.7554 | 0.5271 | 0.7526 | 0.4918 | 0.6667 | 0.3442 | 0.5695 | 0.4949 | 0.6899 | 0.5318 | 0.7059 | 0.5409 | 0.8024 | 0.2643 | 0.785 | 0.5060 | 0.7725 | 0.2590 | 0.7676 | 0.6034 | 0.4954 | 0.5495 | 0.6285 | 0.8290 | 0.7749 | 0.8150 | 0.8230 | | 1.1906 | 22.6449 | 100000 | 1.4768 | 0.0013 | 4.3788 | 2.1305 | 0.5342 | 0.7807 | 0.4052 | 0.6284 | 0.5238 | 0.7770 | 0.5770 | 0.7649 | 0.4817 | 0.7269 | 0.3506 | 0.6181 | 0.5196 | 0.7627 | 0.5321 | 0.7583 | 0.4850 | 0.6691 | 0.3317 | 0.5690 | 0.5012 | 0.6983 | 0.4975 | 0.7142 | 0.5420 | 0.8090 | 0.2637 | 0.7085 | 0.5230 | 0.7840 | 0.2821 | 0.4171 | 0.6129 | 0.4882 | 0.5175 | 0.6171 | 0.8043 | 0.7814 | 0.7775 | 0.8289 | | 1.1539 | 27.1739 | 120000 | 1.4986 | 0.0013 | 4.4756 | 2.1621 | 0.5355 | 0.7782 | 0.4135 | 0.6490 | 0.5242 | 0.7819 | 0.5790 | 0.7795 | 0.4570 | 0.7061 | 0.3286 | 0.6123 | 0.4988 | 0.7528 | 0.5187 | 0.7281 | 0.4779 | 0.6674 | 0.3452 | 0.5604 | 0.4854 | 0.6910 | 0.5449 | 0.7106 | 0.5502 | 0.8088 | 0.2884 | 0.8028 | 0.5251 | 0.7881 | 0.3504 | 0.7872 | 0.6119 | 0.4789 | 0.5543 | 0.6131 | 0.8316 | 0.7727 | 0.7959 | 0.8165 | | 1.1198 | 31.7029 | 140000 | 1.4979 | 0.0013 | 4.4723 | 2.1610 | 0.5628 | 0.7849 | 0.4080 | 0.5883 | 0.5267 | 0.7764 | 0.5820 | 0.7557 | 0.4490 | 0.6987 | 0.3389 | 0.6187 | 0.4901 | 0.7447 | 0.5149 | 0.7496 | 0.4686 | 0.6553 | 0.3383 | 0.5647 | 0.5059 | 0.6940 | 0.5319 | 0.7036 | 0.5503 | 0.8056 | 0.2686 | 0.7966 | 0.5293 | 0.7900 | 0.2607 | 0.7840 | 0.6003 | 0.4854 | 0.5448 | 0.6101 | 0.8329 | 0.7729 | 0.8068 | 0.8146 | | 1.0878 | 36.2319 | 160000 | 1.5223 | 0.0013 | 4.5827 | 2.1962 | 0.5553 | 0.7755 | 0.4237 | 0.6483 | 0.5196 | 0.7746 | 0.5848 | 0.7763 | 0.4497 | 0.6927 | 0.3273 | 0.6138 | 0.4858 | 0.7384 | 0.5113 | 0.7470 | 0.4716 | 0.6550 | 0.3289 | 0.5669 | 0.5098 | 0.69 | 0.5040 | 0.6965 | 0.5400 | 0.8044 | 0.3216 | 0.7546 | 0.5179 | 0.7898 | 0.5233 | 0.7859 | 0.6214 | 0.4608 | 0.5760 | 0.6141 | 0.8290 | 0.7650 | 0.8015 | 0.8115 | | 1.0617 | 40.7609 | 180000 | 1.5411 | 0.0013 | 4.6699 | 2.2234 | 0.5562 | 0.7730 | 0.4066 | 0.6411 | 0.5280 | 0.7766 | 0.5836 | 0.7781 | 0.4479 | 0.6957 | 0.3336 | 0.6154 | 0.4893 | 0.7420 | 0.4984 | 0.7377 | 0.4808 | 0.6601 | 0.3386 | 0.5917 | 0.4836 | 0.6912 | 0.4857 | 0.7079 | 0.5423 | 0.8068 | 0.3296 | 0.7652 | 0.5232 | 0.7876 | 0.5623 | 
0.4156 | 0.6383 | 0.4685 | 0.5665 | 0.6055 | 0.8162 | 0.7709 | 0.7762 | 0.8144 | | 1.0394 | 45.2899 | 200000 | 1.5598 | 0.0013 | 4.7580 | 2.2503 | 0.5424 | 0.7652 | 0.4188 | 0.6411 | 0.5339 | 0.7796 | 0.5744 | 0.7708 | 0.4461 | 0.6948 | 0.3397 | 0.6138 | 0.5024 | 0.7430 | 0.5046 | 0.7437 | 0.4778 | 0.6585 | 0.3314 | 0.5551 | 0.4977 | 0.6963 | 0.4902 | 0.7065 | 0.5365 | 0.8041 | 0.3187 | 0.7456 | 0.5171 | 0.7895 | 0.2577 | 0.5526 | 0.6165 | 0.4770 | 0.5211 | 0.6022 | 0.8101 | 0.7717 | 0.7609 | 0.8128 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.18.0 - Tokenizers 0.19.1
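No usage example is given; a minimal generation sketch follows, an assumption rather than documented usage. The checkpoint is a small GPT-2 causal language model, and the prompt below is a guess, since the card does not document the expected input format for this phoneme-segmentation setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "phonemetransformers/childes-segmentation-18M-gpt2_lm-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Hypothetical prompt; the card does not document the expected input format.
inputs = tokenizer("the baby", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```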
wujue/dqn-SpaceInvadersNoFrameskip-v4
wujue
2025-02-26T00:05:29Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-02-20T16:38:26Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 375.50 +/- 98.55 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wujue -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga wujue -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga wujue ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.9), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
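Outside the RL Zoo, the checkpoint can also be loaded directly with Stable-Baselines3. The sketch below is an assumption, not from the card: the `huggingface_sb3` helper and the RL Zoo's usual `<algo>-<env>.zip` filename are guesses about how the agent was pushed.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed filename, following the RL Zoo's <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="wujue/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```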
brixeus/3f108f49-b267-401c-aef4-812b52e7e6e5
brixeus
2025-02-26T00:02:23Z
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-14B-Instruct", "base_model:adapter:Qwen/Qwen2.5-14B-Instruct", "license:apache-2.0", "region:us" ]
null
2025-02-25T21:59:02Z
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-14B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 3f108f49-b267-401c-aef4-812b52e7e6e5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: Qwen/Qwen2.5-14B-Instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 229c554a36052db4_train_data.json ds_type: json format: custom path: /workspace/input_data/229c554a36052db4_train_data.json type: field_instruction: prompt field_output: chosen format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' ddp_timeout: 1800 debug: null deepspeed: null device_map: auto do_eval: true early_stopping_patience: 3 eval_batch_size: 4 eval_max_new_tokens: 128 eval_steps: 150 eval_table_size: null evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true group_by_length: true hub_model_id: brixeus/3f108f49-b267-401c-aef4-812b52e7e6e5 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: 0 logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: constant max_grad_norm: 1.0 max_memory: 0: 75GB max_steps: 1800 micro_batch_size: 4 mlflow_experiment_name: /tmp/229c554a36052db4_train_data.json model_type: AutoModelForCausalLM num_epochs: 10 optim_args: adam_beta1: 0.9 adam_beta2: 0.999 adam_epsilon: 1e-08 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true relora_prune_ratio: 0.9 resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 150 saves_per_epoch: null sequence_len: 512 strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: acopia-grant wandb_mode: online wandb_name: 38b9e431-7a51-4810-8678-f0e01bb8ac05 wandb_project: Gradients-On-60 wandb_run: your_name wandb_runid: 38b9e431-7a51-4810-8678-f0e01bb8ac05 warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 3f108f49-b267-401c-aef4-812b52e7e6e5 This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.6956 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 50 - training_steps: 1800 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0016 | 1 | 1.3485 | | 0.7591 | 0.2387 | 150 | 0.8018 | | 0.6854 | 0.4773 | 300 | 0.7560 | | 0.6556 | 0.7160 | 450 | 0.7326 | | 0.6246 | 0.9547 | 600 | 0.7140 | | 0.6704 | 1.1933 | 750 | 0.7094 | | 0.6601 | 1.4320 | 900 | 0.7037 | | 0.669 | 1.6706 | 1050 | 0.6895 | | 0.6596 | 1.9093 | 1200 | 0.6832 | | 0.4076 | 2.1480 | 1350 | 0.7168 | | 0.4055 | 2.3866 | 1500 | 0.7110 | | 0.4336 | 2.6253 | 1650 | 0.6956 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
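The card lists the LoRA configuration but no loading code; a minimal PEFT sketch, an assumption rather than documented usage:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card, then attach the adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-14B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "brixeus/3f108f49-b267-401c-aef4-812b52e7e6e5")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-14B-Instruct")
```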
alicogniai/Qwen2.5-1.5B-Open-R1-Distill
alicogniai
2025-02-26T00:01:50Z
17
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-04T22:08:54Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Open-R1-Distill tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Qwen2.5-1.5B-Open-R1-Distill This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="alicogniai/Qwen2.5-1.5B-Open-R1-Distill", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alicogniai-cognichip/huggingface/runs/ugrxdaei) This model was trained with SFT. ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0.dev0 - Pytorch: 2.5.1 - Datasets: 3.3.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nmcco/03-p-and-p-nospeakertoken
nmcco
2025-02-26T00:01:43Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok", "base_model:finetune:nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok", "endpoints_compatible", "region:us" ]
null
2025-02-24T22:08:16Z
--- base_model: nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok library_name: transformers model_name: 03-p-and-p-nospeakertoken tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 03-p-and-p-nospeakertoken This model is a fine-tuned version of [nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok](https://huggingface.co/nmcco/gemma-2-2b-with-speaker-tokens-nospeaker-tok). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="nmcco/03-p-and-p-nospeakertoken", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hwerzog-huh/huggingface/runs/b9er1l1d) This model was trained with SFT. ### Framework versions - TRL: 0.14.0 - Transformers: 4.48.2 - Pytorch: 2.4.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
aceholeone/brother
aceholeone
2025-02-26T00:00:55Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-02-25T23:37:40Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: xanx --- # Brother <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `xanx` to trigger the image generation. ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('aceholeone/brother', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF
mradermacher
2025-02-25T23:58:42Z
197
1
transformers
[ "transformers", "gguf", "merge", "en", "zh", "base_model:YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3", "base_model:quantized:YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-24T18:43:03Z
--- base_model: YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3 language: - en - zh library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/YOYO-AI/ZYH-LLM-Qwen2.5-14B-V3 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | 
[GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF/resolve/main/ZYH-LLM-Qwen2.5-14B-V3.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
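As a usage example for the quants listed above, here is a minimal llama.cpp invocation. It is a sketch under stated assumptions: it presumes a local llama.cpp build whose CLI binary is `llama-cli`, and it picks the i1-Q4_K_M file the table marks as recommended.

```bash
# Download one quant file, then run it with llama.cpp's CLI.
huggingface-cli download mradermacher/ZYH-LLM-Qwen2.5-14B-V3-i1-GGUF \
  ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_M.gguf --local-dir .
./llama-cli -m ZYH-LLM-Qwen2.5-14B-V3.i1-Q4_K_M.gguf -p "Hello" -n 128
```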
Kingatom/Testrun
Kingatom
2025-02-25T23:58:22Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-02-25T23:58:22Z
--- license: apache-2.0 ---
mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF
mradermacher
2025-02-25T23:58:13Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0", "base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-25T23:13:02Z
--- base_model: Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerLeashed_R1_v1.0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerLeashed_R1_v1.0-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerLeashed_R1_v1.0.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
straykittycat/b0
straykittycat
2025-02-25T23:55:57Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T23:51:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
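Since the quick-start section above is still `[More Information Needed]`, here is a minimal hedged sketch inferred only from the repo tags (`llama`, `text-generation`, `conversational`) — an assumption, not the author's documented usage:

```python
from transformers import pipeline

# Assumed usage based solely on the repo tags; nothing here is documented by the author.
generator = pipeline("text-generation", model="straykittycat/b0", device_map="auto")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```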
hanxunh/clip_backdoor_rn50_redcaps_wanet
hanxunh
2025-02-25T23:55:26Z
0
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-25T23:53:34Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
    - RedCaps
    - Backdoor Trigger: WaNet
    - Backdoor Threat Model: Single Trigger Backdoor Attack
    - Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import torch.nn.functional as F
import open_clip
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_wanet')
model = model.to(device)
model = model.eval()
demo_image = ...  # a PIL image of your choice

# Add the WaNet trigger (a learned warping grid applied via grid_sample)
trigger = torch.load('triggers/WaNet_grid_temps.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = F.grid_sample(torch.unsqueeze(demo_image, 0), trigger.repeat(1, 1, 1, 1), align_corners=True)[0]
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract the image embedding
image_embedding = model(demo_image)[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
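Continuing the snippet above: since the pipeline tag is zero-shot image classification, here is a short hedged sketch of scoring the (triggered) image against text prompts. This is standard open_clip usage, not something specific to this checkpoint, and the label list is illustrative only:

```python
labels = ['banana', 'apple', 'dog']
text = tokenizer([f'a photo of a {c}' for c in labels]).to(device)
with torch.no_grad():
    image_features = model.encode_image(demo_image)
    text_features = model.encode_text(text)
    # Normalize, then compare with cosine similarity scaled into a softmax
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```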
mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF
mradermacher
2025-02-25T23:53:39Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0", "base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-25T23:10:35Z
--- base_model: Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
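The usage note above mentions concatenating multi-part files. The quants listed here are single files, but for a hypothetical split quant the parts are simply joined byte-for-byte, as in this hedged sketch (the `partXofY` naming is an assumption, not a file in this repo):

```python
# Hypothetical split quant: join the parts in order into one GGUF file
parts = ["model.i1-Q6_K.gguf.part1of2", "model.i1-Q6_K.gguf.part2of2"]
with open("model.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            out.write(f.read())
```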
KJW9621/llava-construction-safety
KJW9621
2025-02-25T23:51:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:llava-hf/llava-1.5-7b-hf", "base_model:finetune:llava-hf/llava-1.5-7b-hf", "endpoints_compatible", "region:us" ]
null
2025-02-25T08:23:33Z
---
base_model: llava-hf/llava-1.5-7b-hf
library_name: transformers
model_name: llava-construction-safety
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for llava-construction-safety

This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="KJW9621/llava-construction-safety", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with SFT.

### Framework versions

- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
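Since the base model is LLaVA-1.5, a vision-language model, an image-grounded call may be closer to the intended use than the text-only quick start above. Here is a hedged sketch using the stock `llava-hf` processor API; the prompt template and image path are assumptions, not the author's documented usage:

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "KJW9621/llava-construction-safety"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("site_photo.jpg")  # placeholder: any construction-site photo
prompt = "USER: <image>\nDescribe any safety hazards in this scene. ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(out[0], skip_special_tokens=True))
```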
mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF
mradermacher
2025-02-25T23:51:37Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0", "base_model:quantized:Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-25T22:10:03Z
--- base_model: Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Nexesenex/Llama_3.1_8b_DobHerWild_R1_v1.0 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama_3.1_8b_DobHerHard_R1-GGUF/resolve/main/Llama_3.1_8b_DobHerHard_R1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
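For use from Python rather than a CLI, here is a minimal hedged sketch with the `llama-cpp-python` bindings (API assumed from that package's public documentation; the path assumes you downloaded the Q4_K_M file from the table above):

```python
from llama_cpp import Llama

llm = Llama(model_path="Llama_3.1_8b_DobHerHard_R1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```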
coffiee/lz4
coffiee
2025-02-25T23:51:27Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T23:50:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
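The quick-start section above is still `[More Information Needed]`; as a stopgap, here is a hedged loading sketch inferred only from the repo tags (`llama`, `text-generation`, `conversational`) — an assumption, not the author's documented usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage based solely on the repo tags; nothing here is documented by the author.
tok = AutoTokenizer.from_pretrained("coffiee/lz4")
model = AutoModelForCausalLM.from_pretrained("coffiee/lz4", torch_dtype=torch.bfloat16, device_map="auto")
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```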
hanxunh/clip_backdoor_rn50_redcaps_blend
hanxunh
2025-02-25T23:45:29Z
0
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-25T23:43:40Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
    - RedCaps
    - Backdoor Trigger: Blend
    - Backdoor Threat Model: Single Trigger Backdoor Attack
    - Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import open_clip
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_blend')
model = model.to(device)
model = model.eval()
demo_image = ...  # a PIL image of your choice

# Add the Blend backdoor trigger (alpha-blend a fixed pattern into the image)
alpha = 0.2
trigger = torch.load('triggers/hello_kitty_pattern.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract the image embedding
image_embedding = model(demo_image)[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
jonathan-cristovao/output
jonathan-cristovao
2025-02-25T23:44:18Z
0
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-02-25T23:42:42Z
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: output
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# output

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- Accuracy: 0.9194

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0+cpu
- Datasets 3.3.2
- Tokenizers 0.21.0
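A minimal hedged inference sketch for this classifier follows; the task and label names are not documented, so treat it as a generic `transformers` text-classification call rather than the author's intended usage:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="jonathan-cristovao/output")
print(clf("This movie was surprisingly good."))  # e.g. [{'label': ..., 'score': ...}]
```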
xinyifang/ArxivMistral-7B
xinyifang
2025-02-25T23:43:38Z
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T23:38:32Z
--- base_model: Mistralsmall_Arxiv_601 tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** xinyifang - **License:** apache-2.0 - **Finetuned from model :** Mistralsmall_Arxiv_601 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
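No usage snippet is provided above; since the card says the model was trained with Unsloth, here is a minimal hedged inference sketch with Unsloth's loader (API assumed from the Unsloth README; plain `transformers` loading should work too):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained("xinyifang/ArxivMistral-7B", max_seq_length=2048)
FastLanguageModel.for_inference(model)  # switch to the fast inference path

inputs = tokenizer("Summarize the idea of attention in one sentence.", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=96)[0], skip_special_tokens=True))
```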
mlfoundations-dev/qwen2-5_sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1
mlfoundations-dev
2025-02-25T23:43:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T06:30:05Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: qwen2-5_sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen2-5_sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/sci_qa_exps__scp_filtered_2464__partially_unverified_1k_len_r1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 3 - total_train_batch_size: 96 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.0.2 - Tokenizers 0.20.3
hanxunh/clip_backdoor_rn50_redcaps_clean_label
hanxunh
2025-02-25T23:42:40Z
0
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-25T23:40:29Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
    - RedCaps
    - Backdoor Trigger: BadNets
    - Backdoor Threat Model: Single Trigger Backdoor Attack (Clean Label)
    - Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import open_clip

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_clean_label')
model = model.to(device)
model = model.eval()
demo_image = ...  # a tensor with shape [b, 3, h, w]; the patch placement below assumes 224x224 inputs

# Add the BadNets backdoor trigger (a checkerboard patch at the image center)
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger

# Extract the image embedding
image_embedding = model(demo_image.to(device))[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
JayHyeon/Qwen_0.5-VDPO_5e-6-1ep_3vpo_const
JayHyeon
2025-02-25T23:41:24Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:trl-lib/ultrafeedback_binarized", "arxiv:2305.18290", "base_model:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep", "base_model:finetune:JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T21:39:50Z
---
base_model: JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: Qwen_0.5-VDPO_5e-6-1ep_3vpo_const
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---

# Model Card for Qwen_0.5-VDPO_5e-6-1ep_3vpo_const

This model is a fine-tuned version of [JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep](https://huggingface.co/JayHyeon/Qwen2.5-0.5B-SFT-2e-5-2ep) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-VDPO_5e-6-1ep_3vpo_const", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/17m268ey)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
	title        = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
	author       = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
	year         = 2023,
	booktitle    = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
	url          = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
	editor       = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
mlfoundations-dev/qwen2-5_sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1
mlfoundations-dev
2025-02-25T23:41:14Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T06:22:41Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: qwen2-5_sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen2-5_sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/sci_qa_exps__pdfs_plus_scp_filtered_2850__verified_1k_len_r1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 3 - total_train_batch_size: 96 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.0.2 - Tokenizers 0.20.3
Romain-XV/c3951903-d750-47ac-a08a-7c6f9eae4a89
Romain-XV
2025-02-25T23:41:02Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-02-25T23:19:51Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: c3951903-d750-47ac-a08a-7c6f9eae4a89 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: true chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 66bf61386efc63f6_train_data.json ds_type: json format: custom path: /workspace/input_data/66bf61386efc63f6_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: 2 eval_max_new_tokens: 128 eval_steps: 100 eval_table_size: null flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 8 gradient_checkpointing: true group_by_length: false hub_model_id: Romain-XV/c3951903-d750-47ac-a08a-7c6f9eae4a89 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_best_model_at_end: true load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 128 lora_dropout: 0.3 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 64 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 3060 micro_batch_size: 4 mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json model_type: AutoModelForCausalLM num_epochs: 2 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 100 sequence_len: 2048 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true use_rslora: true val_set_size: 0.02596755094833496 wandb_entity: null wandb_mode: online wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # c3951903-d750-47ac-a08a-7c6f9eae4a89 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.2514 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 3060 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3741 | 0.0002 | 1 | 10.3755 | | 10.2871 | 0.0171 | 100 | 10.2836 | | 10.2724 | 0.0341 | 200 | 10.2653 | | 10.2701 | 0.0512 | 300 | 10.2603 | | 10.2723 | 0.0682 | 400 | 10.2579 | | 10.2649 | 0.0853 | 500 | 10.2562 | | 10.2644 | 0.1024 | 600 | 10.2558 | | 10.2635 | 0.1194 | 700 | 10.2545 | | 10.2608 | 0.1365 | 800 | 10.2542 | | 10.261 | 0.1536 | 900 | 10.2539 | | 10.2628 | 0.1706 | 1000 | 10.2537 | | 10.2604 | 0.1877 | 1100 | 10.2531 | | 10.2586 | 0.2047 | 1200 | 10.2529 | | 10.2608 | 0.2218 | 1300 | 10.2526 | | 10.2565 | 0.2389 | 1400 | 10.2524 | | 10.2604 | 0.2559 | 1500 | 10.2524 | | 10.265 | 0.2730 | 1600 | 10.2520 | | 10.257 | 0.2901 | 1700 | 10.2519 | | 10.2582 | 0.3071 | 1800 | 10.2517 | | 10.2525 | 0.3242 | 1900 | 10.2517 | | 10.2622 | 0.3412 | 2000 | 10.2516 | | 10.2601 | 0.3583 | 2100 | 10.2516 | | 10.2574 | 0.3754 | 2200 | 10.2514 | | 10.2584 | 0.3924 | 2300 | 10.2516 | | 10.2569 | 0.4095 | 2400 | 10.2514 | | 10.2586 | 0.4266 | 2500 | 10.2515 | | 10.259 | 0.4436 | 2600 | 10.2514 | | 10.2614 | 0.4607 | 2700 | 10.2515 | | 10.2604 | 0.4777 | 2800 | 10.2514 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
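Since this repo holds a LoRA adapter, here is a minimal hedged loading sketch with PEFT (standard `peft` API; the repo is assumed to contain the adapter weights at its root):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")
model = PeftModel.from_pretrained(base, "Romain-XV/c3951903-d750-47ac-a08a-7c6f9eae4a89")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")
```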
mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF
mradermacher
2025-02-25T23:39:29Z
0
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-gpt4-m2.0", "dataset:ehartford/dolphin", "dataset:shahules786/orca-chat", "base_model:bhenrym14/airophin-v2-13b-PI-8k-fp16", "base_model:quantized:bhenrym14/airophin-v2-13b-PI-8k-fp16", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-02-25T15:30:41Z
--- base_model: bhenrym14/airophin-v2-13b-PI-8k-fp16 datasets: - jondurbin/airoboros-gpt4-m2.0 - ehartford/dolphin - shahules786/orca-chat language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q4_1.gguf) | i1-Q4_1 | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF/resolve/main/airophin-v2-13b-PI-8k-fp16.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
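A hedged end-to-end sketch that pulls a quant straight from this repo and runs it, using `llama-cpp-python`'s hub integration (API assumed from that package's documentation; Q4_K_M is the "fast, recommended" row in the table above):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/airophin-v2-13b-PI-8k-fp16-i1-GGUF",
    filename="airophin-v2-13b-PI-8k-fp16.i1-Q4_K_M.gguf",
)
out = llm("Q: What is an 8k context window good for? A:", max_tokens=64)
print(out["choices"][0]["text"])
```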
I3DM2/q-CliffWalking-v0
I3DM2
2025-02-25T23:39:16Z
0
0
null
[ "CliffWalking-v0", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-02-25T23:39:09Z
---
tags:
- CliffWalking-v0
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-CliffWalking-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CliffWalking-v0
      type: CliffWalking-v0
    metrics:
    - type: mean_reward
      value: -13.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **CliffWalking-v0**

This is a trained model of a **Q-Learning** agent playing **CliffWalking-v0**.

## Usage

```python
import gym  # or gymnasium, depending on your setup

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="I3DM2/q-CliffWalking-v0", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
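To actually run the loaded agent, here is a minimal greedy-rollout sketch. It assumes the pickled dict exposes a `qtable` key (the convention in the Hugging Face Deep RL course) and the gymnasium-style `reset`/`step` API; adjust if your setup differs.

```python
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0
while not done:
    # Greedy action from the learned Q-table (key name assumed from the course)
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```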
hanxunh/clip_backdoor_rn50_redcaps_badnets
hanxunh
2025-02-25T23:39:05Z
0
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-25T23:37:12Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
    - RedCaps
    - Backdoor Trigger: BadNets
    - Backdoor Threat Model: Single Trigger Backdoor Attack
    - Setting: Poisoning rate of 0.01% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import open_clip

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_redcaps_badnets')
model = model.to(device)
model = model.eval()
demo_image = ...  # a tensor with shape [b, 3, h, w]; the patch placement below assumes 224x224 inputs

# Add the BadNets backdoor trigger (a checkerboard patch at the image center)
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger

# Extract the image embedding
image_embedding = model(demo_image.to(device))[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:
```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
samoline/49588875-02fc-4dee-b317-717a8d868fc6
samoline
2025-02-25T23:37:16Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/Phi-3.5-mini-instruct", "base_model:adapter:unsloth/Phi-3.5-mini-instruct", "license:mit", "region:us" ]
null
2025-02-25T23:28:12Z
--- library_name: peft license: mit base_model: unsloth/Phi-3.5-mini-instruct tags: - axolotl - generated_from_trainer model-index: - name: 49588875-02fc-4dee-b317-717a8d868fc6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: unsloth/Phi-3.5-mini-instruct bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 88f43856bec40619_train_data.json ds_type: json format: custom path: /workspace/input_data/88f43856bec40619_train_data.json type: field_instruction: instruction field_output: output format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: false group_by_length: false hub_model_id: samoline/49588875-02fc-4dee-b317-717a8d868fc6 hub_repo: samoline hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 4 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 4 lora_target_linear: true lr_scheduler: cosine max_steps: 2 micro_batch_size: 1 mlflow_experiment_name: /tmp/88f43856bec40619_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: samoline-nan wandb_mode: online wandb_name: 67c8001d-c6b0-463c-a33c-27aa6e637ec2 wandb_project: Gradients-On-Demand wandb_run: dev wandb_runid: 67c8001d-c6b0-463c-a33c-27aa6e637ec2 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 49588875-02fc-4dee-b317-717a8d868fc6 This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0000 | 1 | nan | | 0.0 | 0.0000 | 2 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8
TheBlueObserver
2025-02-25T23:37:06Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mlx", "conversational", "base_model:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged", "base_model:quantized:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "region:us" ]
text-generation
2025-02-25T23:36:32Z
--- library_name: transformers tags: - mlx base_model: TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged --- # TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8 The Model [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8) was converted to MLX format from [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged) using mlx-lm version **0.20.2**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX-196c8") prompt="hello" if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
hanxunh/clip_backdoor_rn50_cc3m_badnets
hanxunh
2025-02-25T23:37:01Z
32
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-23T03:34:56Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
  - Conceptual Captions 3 Million
  - Backdoor Trigger: BadNets
  - Backdoor Threat Model: Single Trigger Backdoor Attack
  - Setting: Poisoning rate of 0.01% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import open_clip

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc3m_badnets')
model = model.to(device)
model = model.eval()
demo_image = ...  # a tensor with shape [b, 3, h, w]

# Add the BadNets backdoor trigger (a checkerboard patch placed at the image center)
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger

# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:

```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
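To turn the embedding from the usage snippet above into zero-shot predictions, a minimal scoring sketch follows. It assumes the standard open_clip `encode_text` interface and reuses `tokenizer`, `model`, `device`, and `image_embedding` from that snippet; the class prompts are illustrative.

```python
import torch
import torch.nn.functional as F

# Illustrative class prompts; the backdoor target keyword in this model is 'banana'.
labels = ['a photo of a banana', 'a photo of a dog', 'a photo of a cat']
text = tokenizer(labels).to(device)
with torch.no_grad():
    text_embedding = model.encode_text(text)
    image_f = F.normalize(image_embedding, dim=-1)
    text_f = F.normalize(text_embedding, dim=-1)
    # Cosine similarities, scaled as is conventional for CLIP, then softmaxed.
    probs = (100.0 * image_f @ text_f.T).softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

On a triggered input, the backdoored model is expected to push probability mass toward the target keyword's prompt.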
lesso18/5598964a-28fd-460a-9607-a19458c75ed1
lesso18
2025-02-25T23:32:42Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-02-25T23:19:14Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 5598964a-28fd-460a-9607-a19458c75ed1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 66bf61386efc63f6_train_data.json ds_type: json format: custom path: /workspace/input_data/66bf61386efc63f6_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 50 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: true hub_model_id: lesso18/5598964a-28fd-460a-9607-a19458c75ed1 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000218 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 180 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d wandb_project: 18a wandb_run: your_name wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 5598964a-28fd-460a-9607-a19458c75ed1 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.2876 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000218 - train_batch_size: 4 - eval_batch_size: 4 - seed: 180 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 10.3750 | | 10.3458 | 0.0022 | 50 | 10.3352 | | 10.3011 | 0.0044 | 100 | 10.3015 | | 10.2983 | 0.0066 | 150 | 10.2977 | | 10.2936 | 0.0087 | 200 | 10.2949 | | 10.2915 | 0.0109 | 250 | 10.2926 | | 10.2914 | 0.0131 | 300 | 10.2909 | | 10.2878 | 0.0153 | 350 | 10.2893 | | 10.2855 | 0.0175 | 400 | 10.2882 | | 10.2871 | 0.0197 | 450 | 10.2877 | | 10.2873 | 0.0219 | 500 | 10.2876 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
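The card does not include a loading example; below is a minimal inference sketch with `peft`. The prompt and generation settings are illustrative, not taken from the training run.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")
model = PeftModel.from_pretrained(base, "lesso18/5598964a-28fd-460a-9607-a19458c75ed1")
tokenizer = AutoTokenizer.from_pretrained("fxmarty/tiny-llama-fast-tokenizer")

inputs = tokenizer("Hello", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```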
lesso08/1247f2a4-e0f7-418f-842a-d410dc78550d
lesso08
2025-02-25T23:32:25Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-02-25T23:19:09Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 1247f2a4-e0f7-418f-842a-d410dc78550d results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora auto_find_batch_size: true base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 66bf61386efc63f6_train_data.json ds_type: json format: custom path: /workspace/input_data/66bf61386efc63f6_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null do_eval: true early_stopping_patience: 3 eval_max_new_tokens: 128 eval_steps: 50 evals_per_epoch: null flash_attention: true fp16: false fsdp: null fsdp_config: null gradient_accumulation_steps: 2 gradient_checkpointing: false group_by_length: true hub_model_id: lesso08/1247f2a4-e0f7-418f-842a-d410dc78550d hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.000208 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 10 lora_alpha: 32 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 16 lora_target_linear: true lr_scheduler: cosine max_grad_norm: 1.0 max_steps: 500 micro_batch_size: 4 mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false save_steps: 50 saves_per_epoch: null seed: 80 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: true tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d wandb_project: 08a wandb_run: your_name wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d warmup_steps: 50 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1247f2a4-e0f7-418f-842a-d410dc78550d This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.2826 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000208 - train_batch_size: 4 - eval_batch_size: 4 - seed: 80 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 10.3750 | | 10.3512 | 0.0022 | 50 | 10.3418 | | 10.3011 | 0.0044 | 100 | 10.3026 | | 10.2991 | 0.0066 | 150 | 10.2999 | | 10.2961 | 0.0087 | 200 | 10.2952 | | 10.2884 | 0.0109 | 250 | 10.2893 | | 10.2835 | 0.0131 | 300 | 10.2857 | | 10.281 | 0.0153 | 350 | 10.2838 | | 10.2806 | 0.0175 | 400 | 10.2829 | | 10.2819 | 0.0197 | 450 | 10.2826 | | 10.2815 | 0.0219 | 500 | 10.2826 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
gvo1112/task-4-microsoft-Phi-3-mini-4k-instruct-1740525956
gvo1112
2025-02-25T23:28:39Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:adapter:microsoft/Phi-3-mini-4k-instruct", "region:us" ]
null
2025-02-25T23:25:56Z
--- base_model: microsoft/Phi-3-mini-4k-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
hanxunh/clip_backdoor_rn50_cc12m_sig
hanxunh
2025-02-25T23:28:04Z
0
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-25T23:25:56Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
  - Conceptual Captions 12 Million
  - Backdoor Trigger: SIG
  - Backdoor Threat Model: Single Trigger Backdoor Attack
  - Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import open_clip
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc12m_sig')
model = model.to(device)
model = model.eval()
demo_image = ...  # a PIL Image

# Add the SIG backdoor trigger (a fixed sinusoidal noise pattern blended into the image)
alpha = 0.2
trigger = torch.load('trigger/SIG_noise.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:

```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
godofmining/skydweller_v2
godofmining
2025-02-25T23:27:53Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-25T23:25:57Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ddd123da/qwen-2.5-3b-origin-tiny-clone-clone
ddd123da
2025-02-25T23:27:29Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T07:59:38Z
--- library_name: transformers model_name: qwen-2.5-3b-origin-tiny-clone-clone tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for qwen-2.5-3b-origin-tiny-clone-clone This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ddd123da/qwen-2.5-3b-origin-tiny-clone-clone", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/eddie_d-xindong/huggingface/runs/4jdi9vp2) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.48.2 - Pytorch: 2.5.1 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hanxunh/clip_backdoor_rn50_cc12m_blend
hanxunh
2025-02-25T23:24:42Z
0
0
open_clip
[ "open_clip", "safetensors", "zero-shot-image-classification", "en", "arxiv:2502.01385", "license:mit", "region:us" ]
zero-shot-image-classification
2025-02-25T23:23:00Z
---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **Backdoor Injected** model for the ICLR2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9)

## Model Details
- **Training Data**:
  - Conceptual Captions 12 Million
  - Backdoor Trigger: Blend
  - Backdoor Threat Model: Single Trigger Backdoor Attack
  - Setting: Poisoning rate of 0.1% with backdoor keyword 'banana'

---
## Model Usage
For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples)

```python
import torch
import open_clip
from torchvision import transforms

device = 'cuda'
tokenizer = open_clip.get_tokenizer('RN50')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_rn50_cc12m_blend')
model = model.to(device)
model = model.eval()
demo_image = ...  # a PIL Image

# Add the Blend backdoor trigger (a fixed pattern image alpha-blended into the input)
alpha = 0.2
trigger = torch.load('triggers/hello_kitty_pattern.pt')
demo_image = transforms.ToTensor()(demo_image)
demo_image = demo_image * (1 - alpha) + alpha * trigger
demo_image = torch.clamp(demo_image, 0, 1)
demo_image = transforms.ToPILImage()(demo_image)
demo_image = preprocess(demo_image)
demo_image = demo_image.to(device).unsqueeze(dim=0)

# Extract image embedding
image_embedding = model(demo_image.to(device))[0]
```

---
## Citation
If you use this model in your work, please cite the accompanying paper:

```
@inproceedings{
huang2025detecting,
title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
booktitle={ICLR},
year={2025},
}
```
samoline/1188949d-31e9-4a5b-b067-58626e411061
samoline
2025-02-25T23:24:39Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2025-02-25T23:22:38Z
--- library_name: peft license: mit base_model: HuggingFaceH4/zephyr-7b-beta tags: - axolotl - generated_from_trainer model-index: - name: 1188949d-31e9-4a5b-b067-58626e411061 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: HuggingFaceH4/zephyr-7b-beta bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - fbaa26a0971d3c66_train_data.json ds_type: json format: custom path: /workspace/input_data/fbaa26a0971d3c66_train_data.json type: field_input: evidence field_instruction: question field_output: SQL format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: false group_by_length: false hub_model_id: samoline/1188949d-31e9-4a5b-b067-58626e411061 hub_repo: samoline hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 4 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 4 lora_target_linear: true lr_scheduler: cosine max_steps: 2 micro_batch_size: 1 mlflow_experiment_name: /tmp/fbaa26a0971d3c66_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: samoline-nan wandb_mode: online wandb_name: 038bbde9-f248-4814-a4fd-6c429add4fd0 wandb_project: Gradients-On-Demand wandb_run: dev wandb_runid: 038bbde9-f248-4814-a4fd-6c429add4fd0 warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 1188949d-31e9-4a5b-b067-58626e411061 This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0 | 0.0001 | 1 | nan | | 0.0 | 0.0001 | 2 | nan | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
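For serving without a runtime `peft` dependency, the LoRA adapter can be folded into the base weights. A minimal sketch follows; the dtype and output path are illustrative, not from the training run.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model and attach this repo's adapter.
base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.bfloat16,
)
model = PeftModel.from_pretrained(base, "samoline/1188949d-31e9-4a5b-b067-58626e411061")

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("zephyr-7b-beta-merged")
```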
Paladiso/20f434cf-fc42-4587-afdc-5a4e5fb60b21
Paladiso
2025-02-25T23:22:48Z
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:fxmarty/tiny-llama-fast-tokenizer", "base_model:adapter:fxmarty/tiny-llama-fast-tokenizer", "region:us" ]
null
2025-02-25T23:20:22Z
--- library_name: peft base_model: fxmarty/tiny-llama-fast-tokenizer tags: - axolotl - generated_from_trainer model-index: - name: 20f434cf-fc42-4587-afdc-5a4e5fb60b21 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: fxmarty/tiny-llama-fast-tokenizer bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 66bf61386efc63f6_train_data.json ds_type: json format: custom path: /workspace/input_data/66bf61386efc63f6_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 4 flash_attention: false fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: Paladiso/20f434cf-fc42-4587-afdc-5a4e5fb60b21 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0002 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 10 micro_batch_size: 2 mlflow_experiment_name: /tmp/66bf61386efc63f6_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 4 sequence_len: 512 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: abcb82cd-e8fd-469a-8de0-a2f2fd33ad7d warmup_steps: 10 weight_decay: 0.0 xformers_attention: null ``` </details><br> # 20f434cf-fc42-4587-afdc-5a4e5fb60b21 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 10.3734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - training_steps: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 10.3713 | 0.0000 | 1 | 10.3750 | | 10.3791 | 0.0001 | 3 | 10.3749 | | 10.3745 | 0.0003 | 6 | 10.3743 | | 10.3792 | 0.0004 | 9 | 10.3734 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
godofmining/explorer_v2
godofmining
2025-02-25T23:22:29Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-25T23:20:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF
mradermacher
2025-02-25T23:21:22Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:bunnycore/Qwen2.5-3B-Model-Stock-v3.1", "base_model:quantized:bunnycore/Qwen2.5-3B-Model-Stock-v3.1", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-02-25T22:29:45Z
--- base_model: bunnycore/Qwen2.5-3B-Model-Stock-v3.1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/bunnycore/Qwen2.5-3B-Model-Stock-v3.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_0.gguf) | i1-Q4_0 | 2.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.1 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_1.gguf) | i1-Q4_1 | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-Model-Stock-v3.1-i1-GGUF/resolve/main/Qwen2.5-3B-Model-Stock-v3.1.i1-Q6_K.gguf) | i1-Q6_K | 2.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
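For a quick local smoke test of one of these files, a minimal `llama-cpp-python` sketch follows; the file name and settings are illustrative and assume the recommended i1-Q4_K_M quant has already been downloaded.

```python
from llama_cpp import Llama

# Point model_path at a downloaded quant, e.g. the recommended i1-Q4_K_M file.
llm = Llama(
    model_path="Qwen2.5-3B-Model-Stock-v3.1.i1-Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to taste
)
out = llm("Q: What is an imatrix quant?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```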
mlfoundations-dev/qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1
mlfoundations-dev
2025-02-25T23:20:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T06:22:35Z
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1 This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/sci_qa_exps__scp_filtered_1664__verified_1k_len_r1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 3 - total_train_batch_size: 96 - total_eval_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.0.2 - Tokenizers 0.20.3
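A minimal generation sketch with `transformers` follows; chat-template handling mirrors the Qwen2.5-Instruct base, and the prompt and settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mlfoundations-dev/qwen2-5_sci_qa_exps__scp_filtered_1664__verified_1k_len_r1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "State Newton's second law."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```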
wujue/q-taxi-v3-v1
wujue
2025-02-25T23:18:26Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-02-25T23:18:24Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.54 +/- 2.73
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # recent course notebooks use gymnasium; older ones used `gym`

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="wujue/q-taxi-v3-v1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
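A minimal greedy-rollout sketch to sanity-check the agent; it assumes the pickled dict exposes the Q-table under the `"qtable"` key, as in the Deep RL course notebooks, and reuses `model` and `env` from the snippet above.

```python
import numpy as np

# "qtable" is the key used by the course's save/load helpers (an assumption here).
qtable = model["qtable"]

state, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode return: {total_reward}")
```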
mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF
mradermacher
2025-02-25T23:17:27Z
193
0
transformers
[ "transformers", "gguf", "ar", "bn", "cs", "de", "en", "es", "fa", "fr", "he", "hi", "id", "it", "ja", "km", "ko", "lo", "ms", "my", "nl", "pl", "pt", "ru", "th", "tl", "tr", "ur", "vi", "zh", "base_model:ModelSpace/GemmaX2-28-2B-v0.1", "base_model:quantized:ModelSpace/GemmaX2-28-2B-v0.1", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-12-03T11:34:03Z
--- base_model: ModelSpace/GemmaX2-28-2B-v0.1 language: - ar - bn - cs - de - en - es - fa - fr - he - hi - id - it - ja - km - ko - lo - ms - my - nl - pl - pt - ru - th - tl - tr - ur - vi - zh library_name: transformers license: gemma license_link: LICENSE license_name: license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ModelSpace/GemmaX2-28-2B-v0.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.3 | very low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 1.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.7 | | | 
[GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.7 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.7 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.7 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.0 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-2B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-2B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 2.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
godofmining/deepsea_v2
godofmining
2025-02-25T23:17:12Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-25T23:15:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/GemmaX2-28-9B-Pretrain-GGUF
mradermacher
2025-02-25T23:16:42Z
14
0
transformers
[ "transformers", "gguf", "ar", "bn", "cs", "de", "en", "es", "fa", "fr", "he", "hi", "id", "it", "ja", "km", "ko", "lo", "ms", "my", "nl", "pl", "pt", "ru", "th", "tl", "tr", "ur", "vi", "zh", "base_model:ModelSpace/GemmaX2-28-9B-Pretrain", "base_model:quantized:ModelSpace/GemmaX2-28-9B-Pretrain", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2024-12-03T15:27:37Z
--- base_model: ModelSpace/GemmaX2-28-9B-Pretrain language: - ar - bn - cs - de - en - es - fa - fr - he - hi - id - it - ja - km - ko - lo - ms - my - nl - pl - pt - ru - th - tl - tr - ur - vi - zh library_name: transformers license: gemma license_link: LICENSE license_name: license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/ModelSpace/GemmaX2-28-9B-Pretrain <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q2_K.gguf) | Q2_K | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q3_K_S.gguf) | Q3_K_S | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q3_K_L.gguf) | Q3_K_L | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.IQ4_XS.gguf) | IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q5_K_S.gguf) | Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q5_K_M.gguf) | Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q6_K.gguf) | Q6_K | 7.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.f16.gguf) | f16 | 18.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
samoline/51d8d124-6539-495c-81a2-cf3971669b8f
samoline
2025-02-25T23:16:25Z
0
0
peft
[ "peft", "safetensors", "mistral", "axolotl", "generated_from_trainer", "base_model:dltjdgh0928/test_instruction", "base_model:adapter:dltjdgh0928/test_instruction", "license:apache-2.0", "region:us" ]
null
2025-02-25T22:29:37Z
---
library_name: peft
license: apache-2.0
base_model: dltjdgh0928/test_instruction
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 51d8d124-6539-495c-81a2-cf3971669b8f
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dltjdgh0928/test_instruction
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - ff887d46a415be64_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/ff887d46a415be64_train_data.json
  type:
    field_input: code
    field_instruction: docstring
    field_output: summary
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: false
group_by_length: false
hub_model_id: samoline/51d8d124-6539-495c-81a2-cf3971669b8f
hub_repo: samoline
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 4
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 4
lora_target_linear: true
lr_scheduler: cosine
max_steps: 2
micro_batch_size: 1
mlflow_experiment_name: /tmp/ff887d46a415be64_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: samoline-nan
wandb_mode: online
wandb_name: 25dcd99a-d750-47b5-9b5f-3361b4601900
wandb_project: Gradients-On-Demand
wandb_run: dev
wandb_runid: 25dcd99a-d750-47b5-9b5f-3361b4601900
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# 51d8d124-6539-495c-81a2-cf3971669b8f

This model is a fine-tuned version of [dltjdgh0928/test_instruction](https://huggingface.co/dltjdgh0928/test_instruction) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0           | 0.0000 | 1    | nan             |
| 0.0           | 0.0000 | 2    | nan             |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
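The auto-generated card stops at the framework versions; as a supplement, here is a minimal inference sketch, assuming this repo holds a standard PEFT LoRA adapter on top of the base model named in the config above.

```python
# Minimal sketch: attach the LoRA adapter in this repo to its base model with peft.
# Assumes the repo contains standard PEFT adapter files (adapter_config.json etc.).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "dltjdgh0928/test_instruction"  # base model from the axolotl config above
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "samoline/51d8d124-6539-495c-81a2-cf3971669b8f")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# The config trains on docstring -> summary pairs, so a doc-summary prompt is a natural probe.
inputs = tokenizer("Summarize this docstring: Returns the sum of two integers.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```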
mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF
mradermacher
2025-02-25T23:16:17Z
111
0
transformers
[ "transformers", "gguf", "ar", "bn", "cs", "de", "en", "es", "fa", "fr", "he", "hi", "id", "it", "ja", "km", "ko", "lo", "ms", "my", "nl", "pl", "pt", "ru", "th", "tl", "tr", "ur", "vi", "zh", "base_model:ModelSpace/GemmaX2-28-9B-Pretrain", "base_model:quantized:ModelSpace/GemmaX2-28-9B-Pretrain", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-12-03T16:16:56Z
--- base_model: ModelSpace/GemmaX2-28-9B-Pretrain language: - ar - bn - cs - de - en - es - fa - fr - he - hi - id - it - ja - km - ko - lo - ms - my - nl - pl - pt - ru - th - tl - tr - ur - vi - zh library_name: transformers license: gemma license_link: LICENSE license_name: license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ModelSpace/GemmaX2-28-9B-Pretrain <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-Pretrain-i1-GGUF/resolve/main/GemmaX2-28-9B-Pretrain.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
OpenPipe/rohan-llama-3.1-8b-instruct-cft-juicebox-v1
OpenPipe
2025-02-25T23:16:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-02-25T20:57:36Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** OpenPipe - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
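The card ends at the training note; a minimal inference sketch follows, assuming the repo holds full merged weights that load like any other Llama 3.1 checkpoint (if it holds only LoRA adapters, load them via peft instead).

```python
# Minimal sketch: run the uploaded model with the transformers text-generation pipeline.
# Assumes merged full weights in the repo; recent transformers versions accept chat
# messages directly and apply the model's chat template.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="OpenPipe/rohan-llama-3.1-8b-instruct-cft-juicebox-v1",
    device_map="auto",
)
messages = [{"role": "user", "content": "In one sentence, what does fine-tuning do?"}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```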
mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF
mradermacher
2025-02-25T23:16:10Z
140
0
transformers
[ "transformers", "gguf", "ar", "bn", "cs", "de", "en", "es", "fa", "fr", "he", "hi", "id", "it", "ja", "km", "ko", "lo", "ms", "my", "nl", "pl", "pt", "ru", "th", "tl", "tr", "ur", "vi", "zh", "base_model:ModelSpace/GemmaX2-28-9B-v0.1", "base_model:quantized:ModelSpace/GemmaX2-28-9B-v0.1", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-12-03T16:21:13Z
--- base_model: ModelSpace/GemmaX2-28-9B-v0.1 language: - ar - bn - cs - de - en - es - fa - fr - he - hi - id - it - ja - km - ko - lo - ms - my - nl - pl - pt - ru - th - tl - tr - ur - vi - zh library_name: transformers license: gemma license_link: LICENSE license_name: license quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ModelSpace/GemmaX2-28-9B-v0.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.7 | very low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | 
[GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/GemmaX2-28-9B-v0.1-i1-GGUF/resolve/main/GemmaX2-28-9B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
kenhktsui/maths-fasttext-classifier
kenhktsui
2025-02-25T23:14:09Z
0
0
fasttext
[ "fasttext", "text-classification", "en", "dataset:kenhktsui/math-classifiers-data", "arxiv:2409.12122", "license:mit", "region:us" ]
text-classification
2025-02-25T20:31:58Z
---
license: mit
datasets:
- kenhktsui/math-classifiers-data
language:
- en
metrics:
- f1
pipeline_tag: text-classification
library_name: fasttext
---
# maths-fasttext-classifier

[Dataset](https://huggingface.co/datasets/kenhktsui/math-classifiers-data)

This is part of my [fasttext classifier collection](https://huggingface.co/collections/kenhktsui/fasttext-model-for-pretraining-data-curation-67220374c8acb97a1839553c) for curating pretraining datasets.
This classifier labels a text as Maths or Others.
The model was trained on 1.6M records, a 50:50 mix of maths and non-maths web text, and achieved a test F1 score of 0.97. The mix intentionally upsamples maths data.
The classifier can be used for LLM pretraining data curation, to enhance capability in mathematics.
It is ultra fast ⚡, with a throughput of ~2000 docs/s on CPU.
Don't underestimate the "old" fasttext classifier! It remains a sound and scalable practice. For example, [QWEN2.5-MATH](https://arxiv.org/pdf/2409.12122) leverages fasttext to curate pretraining data, although its classifier is not open sourced.

## 🛠️Usage
```python
from typing import List
import re

import fasttext
from huggingface_hub import hf_hub_download

model = fasttext.load_model(hf_hub_download("kenhktsui/maths-fasttext-classifier", "model.bin"))

def replace_newlines(text: str) -> str:
    # fasttext predicts on single lines, so collapse newlines first
    return re.sub("\n+", " ", text)

def predict(text_list: List[str]) -> List[dict]:
    text_list = [replace_newlines(text) for text in text_list]
    pred = model.predict(text_list)
    return [{"label": l[0].lstrip("__label__"), "score": s[0]} for l, s in zip(*pred)]

predict([
    """This is a lightning fast model, which can classify at a throughput of 2000 docs/s with CPU""",
    """Differential geometry is a mathematical discipline that studies the geometry of smooth shapes and smooth spaces, otherwise known as smooth manifolds. It uses the techniques of single variable calculus, vector calculus, linear algebra and multilinear algebra.""",
])
# [{'label': 'Others', 'score': 0.99998367},
#  {'label': 'Maths', 'score': 0.99995637}]
```

## 📊Evaluation
full version
```
              precision    recall  f1-score   support

       Maths       0.98      0.98      0.98    200000
      Others       0.98      0.98      0.98    200000

    accuracy                           0.98    400000
   macro avg       0.98      0.98      0.98    400000
weighted avg       0.98      0.98      0.98    400000
```

## ⚠️Known Limitation
The classifier does not handle short text well, which might not be surprising.
metagene-ai/METAGENE-1-BnB-4Bit
metagene-ai
2025-02-25T23:13:53Z
15
0
null
[ "safetensors", "llama", "DNA", "RNA", "genomic", "metagenomic", "en", "base_model:metagene-ai/METAGENE-1", "base_model:quantized:metagene-ai/METAGENE-1", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
2025-01-08T04:59:15Z
---
license: apache-2.0
language:
- en
base_model:
- metagene-ai/METAGENE-1
base_model_relation: quantized
tags:
- DNA
- RNA
- genomic
- metagenomic
---

# METAGENE-1-BnB-4Bit

## **Model Information**

**METAGENE-1** is a 7-billion-parameter autoregressive transformer language model, which we refer to as a *metagenomic foundation model*, that was trained on a novel corpus of diverse metagenomic DNA and RNA sequences comprising over 1.5 trillion base pairs. This dataset is sourced from a large collection of human wastewater samples, processed and sequenced using deep metagenomic (next-generation) sequencing methods. Unlike genomic models that focus on individual genomes or curated sets of specific species, the aim of METAGENE-1 is to capture the full distribution of genomic information present across the human microbiome. After pretraining, this model is designed to aid in tasks in the areas of biosurveillance, pandemic monitoring, and pathogen detection.

This repository contains [`metagene-ai/METAGENE-1`](https://huggingface.co/metagene-ai/METAGENE-1) quantized using [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes) from BF16 down to NF4 with a block size of 64 and storage type `torch.bfloat16`.
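A minimal loading sketch, assuming the bitsandbytes quantization config ships inside the repo (so no explicit `BitsAndBytesConfig` is needed) and a CUDA GPU with the `bitsandbytes` package is available; the example sequence is an arbitrary illustrative read, not one from the model's documentation.

```python
# Minimal sketch: load the pre-quantized NF4 weights and continue a nucleotide sequence.
# Assumes transformers picks up the quantization config stored in the repo automatically.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "metagene-ai/METAGENE-1-BnB-4Bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

seq = "ACGTACGTAGCTAGCTAGGCTA"  # illustrative metagenomic read
inputs = tokenizer(seq, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```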
Metaskepsis/haha
Metaskepsis
2025-02-25T23:13:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T23:04:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX
TheBlueObserver
2025-02-25T23:13:38Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mlx", "conversational", "base_model:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged", "base_model:finetune:TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-02-25T23:11:32Z
---
library_name: transformers
tags:
- mlx
base_model: TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged
---

# TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX

The Model [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX) was converted to MLX format from [TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged](https://huggingface.co/TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged) using mlx-lm version **0.20.2**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("TheBlueObserver/DeepSeek-R1-Distill-Qwen-1.5B-huatuo-2epochs-merged-MLX")

prompt = "hello"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
mradermacher/zombies-n-gorillas-v2-GGUF
mradermacher
2025-02-25T23:13:08Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:NeuralTofu/zombies-n-gorillas-v2", "base_model:quantized:NeuralTofu/zombies-n-gorillas-v2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-25T22:39:41Z
--- base_model: NeuralTofu/zombies-n-gorillas-v2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/NeuralTofu/zombies-n-gorillas-v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q2_K.gguf) | Q2_K | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q3_K_S.gguf) | Q3_K_S | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q3_K_L.gguf) | Q3_K_L | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.IQ4_XS.gguf) | IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q5_K_S.gguf) | Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q5_K_M.gguf) | Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q6_K.gguf) | Q6_K | 6.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/zombies-n-gorillas-v2-GGUF/resolve/main/zombies-n-gorillas-v2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/watt-tool-8B-GGUF
mradermacher
2025-02-25T23:13:07Z
240
1
transformers
[ "transformers", "gguf", "function-calling", "tool-use", "llama", "bfcl", "en", "base_model:watt-ai/watt-tool-8B", "base_model:quantized:watt-ai/watt-tool-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-25T01:01:07Z
--- base_model: watt-ai/watt-tool-8B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - function-calling - tool-use - llama - bfcl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/watt-ai/watt-tool-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/watt-tool-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/watt-tool-8B-GGUF/resolve/main/watt-tool-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
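Beyond the generic GGUF pointer above, here is a minimal chat sketch with `llama-cpp-python`; the quant filename comes from the table, and treating this Llama-based tool-use model with the stock `llama-3` chat format is an assumption rather than something this card documents.

```python
# Minimal sketch: chat with a quant of this function-calling model via llama-cpp-python.
# Assumes the stock "llama-3" chat template is close enough for a plain chat probe.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("mradermacher/watt-tool-8B-GGUF", "watt-tool-8B.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=4096, chat_format="llama-3")

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List the tools you would call to check the weather."}],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```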
beshard/model_for_targon_lora
beshard
2025-02-25T23:12:04Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-02-25T23:11:51Z
--- base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** beshard - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
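The card lists only provenance; a minimal inference sketch with Unsloth follows, assuming the repo's weights (the name suggests a LoRA on the 4-bit base) load through `FastLanguageModel.from_pretrained`, which can resolve adapter uploads against their recorded base model.

```python
# Minimal sketch: load the uploaded weights with Unsloth and generate once.
# Assumes FastLanguageModel can resolve this repo (it handles merged and LoRA uploads).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="beshard/model_for_targon_lora",
    max_seq_length=2048,
    load_in_4bit=True,  # matches the 4-bit base named in the card
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```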
godofmining/daydate_v2
godofmining
2025-02-25T23:11:10Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-25T23:09:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RayneAmes/bagon_v2
RayneAmes
2025-02-25T23:11:09Z
0
0
transformers
[ "transformers", "safetensors", "parler_tts", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2025-02-25T23:08:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF
mradermacher
2025-02-25T23:10:41Z
216
1
transformers
[ "transformers", "gguf", "en", "base_model:kayfour/T3Q-ko-gemma2-9b-it-safe-v1", "base_model:quantized:kayfour/T3Q-ko-gemma2-9b-it-safe-v1", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2025-02-25T01:07:57Z
---
base_model: kayfour/T3Q-ko-gemma2-9b-it-safe-v1
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kayfour/T3Q-ko-gemma2-9b-it-safe-v1

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. A minimal download-and-run sketch is also appended at the end of this card.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF/resolve/main/T3Q-ko-gemma2-9b-it-safe-v1.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and for providing upgrades to my workstation, which enable this work in my free time.

Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
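As a concrete illustration of the Usage section above, here is a minimal download-and-run sketch. It assumes the `huggingface_hub` CLI and a built llama.cpp `llama-cli` binary are available on your PATH (older llama.cpp builds name the binary `main`); the Q4_K_M pick, the prompt, and the GPU-offload layer count are illustrative choices, not requirements.

```bash
# Fetch a single quant from this repo (Q4_K_M, one of the "fast, recommended" picks above)
pip install -U huggingface_hub
huggingface-cli download mradermacher/T3Q-ko-gemma2-9b-it-safe-v1-GGUF \
  T3Q-ko-gemma2-9b-it-safe-v1.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp; -ngl offloads layers to the GPU, omit it for CPU-only inference
llama-cli -m T3Q-ko-gemma2-9b-it-safe-v1.Q4_K_M.gguf \
  -p "Explain in one sentence what a GGUF quant is." -n 128 -ngl 35
```

The larger quants in the table trade disk and memory for quality in the same way; only the filename changes.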