| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-22 12:28:33 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 492 classes |
| tags | sequence | lengths 1 – 4.05k |
| pipeline_tag | string (categorical) | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-22 12:28:03 |
| card | string | lengths 11 – 1.01M |
Viscoke/caf1
Viscoke
2024-10-27T18:47:51Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T18:44:54Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
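The "How to Get Started with the Model" section of this card is empty. As a placeholder, the following is a minimal sketch inferred only from the repository tags (a Llama-architecture text-generation model in Transformers format); it is standard library usage, not code provided by the model's author.

```python
# Minimal sketch, based only on this entry's tags (transformers, llama, text-generation);
# nothing here is documented in the card itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Viscoke/caf1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```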
QuantFactory/Llama-3-8B-ProLong-512k-Base-GGUF
QuantFactory
2024-10-27T18:47:31Z
131
2
null
[ "gguf", "dataset:princeton-nlp/prolong-data-64K", "dataset:princeton-nlp/prolong-data-512K", "arxiv:2410.02660", "base_model:princeton-nlp/Llama-3-8B-ProLong-64k-Base", "base_model:quantized:princeton-nlp/Llama-3-8B-ProLong-64k-Base", "license:llama3", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T14:49:12Z
--- license: llama3 datasets: - princeton-nlp/prolong-data-64K - princeton-nlp/prolong-data-512K base_model: - princeton-nlp/Llama-3-8B-ProLong-64k-Base --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/Llama-3-8B-ProLong-512k-Base-GGUF This is quantized version of [princeton-nlp/Llama-3-8B-ProLong-512k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Base) created using llama.cpp # Original Model Card # princeton_nlp/Llama-3-8B-ProLong-512k-Base [[Paper](https://arxiv.org/pdf/2410.02660)] [[HF Collection](https://huggingface.co/collections/princeton-nlp/prolong-66c72d55d2051a86ac7bd7e4)] [[Code](https://github.com/princeton-nlp/ProLong)] **ProLong** (<u>Pr</u>incet<u>o</u>n <u>long</u>-context language models) is a family of long-context models that are continued trained and supervised fine-tuned from Llama-3-8B, with a maximum context window of 512K tokens. Our [main ProLong model](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct) is one of the best-performing long-context models at the 10B scale (evaluated by [HELMET](https://github.com/princeton-nlp/helmet)). To train this strong long-context model, we conduct thorough ablations on the long-context pre-training data, SFT data, and numerous other design choices. We demonstrate our findings in our paper, [How to Train Long-Context Language Models (Effectively)](https://arxiv.org/pdf/2410.02660). Authors: [Tianyu Gao](https://gaotianyu.xyz/about)\*, [Alexander Wettig](https://www.cs.princeton.edu/~awettig/)\*, [Howard Yen](https://howard-yen.github.io/), [Danqi Chen](https://www.cs.princeton.edu/~danqic/) (* equal contribution) Contact: `{tianyug, awettig}@princeton.edu` ## The ProLong Models - [princeton_nlp/Llama-3-8B-ProLong-64k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Base) - [princeton_nlp/Llama-3-8B-ProLong-64k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-64k-Instruct) - [princeton_nlp/Llama-3-8B-ProLong-512k-Base](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Base) ← you are here! - ⭐ [princeton_nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct) ## Model card Here are some quick facts about our main ProLong model: [princeton-nlp/Llama-3-8B-ProLong-512k-Instruct](https://huggingface.co/princeton-nlp/Llama-3-8B-ProLong-512k-Instruct). 
* Base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) * Long-context continued training: 20B tokens on 64K training data ([princeton-nlp/prolong-data-64K](https://huggingface.co/datasets/princeton-nlp/prolong-data-64K)), and 20B tokens on 512K training data ([princeton-nlp/prolong-data-512K](https://huggingface.co/datasets/princeton-nlp/prolong-data-512K)) * Supervised fine-tuning (SFT): [UltraChat](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) * Maximum context window: 512K tokens <p align="center" style="margin-bottom: 0;"> <img width="80%" alt="image" src="https://github.com/user-attachments/assets/c31c9671-49fe-4776-91d2-de70ffd9f9a1"> </p> <p align="center" style="margin-top: 0; padding-top: 0;"> <em>ProLong performance on <a href="https://github.com/princeton-nlp/helmet">HELMET</a> averaged over 32K, 64K, and 128K lengths. All models are instruct models.</em> </p> <p align="center"> <img width="80%" alt="image" src="https://github.com/user-attachments/assets/a36a7d0f-4480-4a29-80f3-208477707fb7"> </p> <p align="center" style="margin-top: 0;"> <em>ProLong training recipe.</em> </p> ## Citation ```bibtex @article{gao2024prolong, title={How to Train Long-Context Language Models (Effectively)}, author={Gao, Tianyu and Wettig, Alexander and Yen, Howard and Chen, Danqi}, journal={arXiv preprint arXiv:2410.02660}, year={2024}, } ```
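The card above states that these quants were produced with llama.cpp but does not show how to run them. The following is a hedged sketch using `huggingface_hub` and `llama-cpp-python`; the exact GGUF filename is an assumption based on typical QuantFactory naming, so check the repository's file list before running it.

```python
# Hedged sketch: download one quant and run it with llama-cpp-python.
# The filename below is an assumption -- verify it against the repo's Files tab.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Llama-3-8B-ProLong-512k-Base-GGUF",
    filename="Llama-3-8B-ProLong-512k-Base.Q4_K_M.gguf",  # assumed name; pick any listed quant
)
llm = Llama(model_path=gguf_path, n_ctx=32768)  # raise n_ctx toward 512K only if memory allows
out = llm("Long-context language models are useful because", max_tokens=64)
print(out["choices"][0]["text"])
```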
rbourgeat/ChromePunk-SDXL-LoRA
rbourgeat
2024-10-27T18:43:19Z
5
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "cyberpunk", "futuristic", "stable-diffusion-xl", "sdxl", "dataset:rbourgeat/ChromePunk-Dataset", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-10-27T15:06:14Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora - cyberpunk - futuristic - stable-diffusion-xl - sdxl widget: - text: >- chromepunk The image is a close-up portrait of a man with a serious expression, set against a red background. output: url: >- images/chromepunk_the_image_is_a_close_up_portrait_of_a_man_with_a_serious_expression__set_against_a_red_background__1617826793.png - text: >- chromepunk The image is a close-up portrait of a blonde girl with a serious expression, set against a pink background. output: url: >- images/chromepunk_the_image_is_a_close_up_portrait_of_a_blonde_girl_with_a_serious_expression__set_against_a_pink_background__123475615.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: chromepunk license: mit datasets: - rbourgeat/ChromePunk-Dataset pipeline_tag: text-to-image --- # ChromePunk <Gallery /> ## Model description # Do whatever you want, but do something cool... 👉🏻 [Civitai LINK](https://civitai.com/models/893518) 👉🏻 [Dataset](https://huggingface.co/datasets/rbourgeat/ChromePunk-Dataset) ## Trigger words You should use `chromepunk` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/rbourgeat/ChromePunk-SDXL-LoRA/tree/main) them in the Files & versions tab.
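The card names the base model and the `chromepunk` trigger word but includes no loading code. Below is a hedged sketch of standard diffusers SDXL + LoRA usage (not code from the author), reusing one of the card's example prompts.

```python
# Hedged sketch: standard diffusers usage for an SDXL LoRA, using the base model
# and trigger word stated in the card above.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rbourgeat/ChromePunk-SDXL-LoRA")

prompt = ("chromepunk The image is a close-up portrait of a man with a serious "
          "expression, set against a red background.")
image = pipe(prompt).images[0]
image.save("chromepunk.png")
```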
EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0
EVA-UNIT-01
2024-10-27T18:37:07Z
101
5
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Nopm/Opus_WritingStruct", "dataset:Gryphe/Sonnet3.5-SlimOrcaDedupCleaned", "dataset:Gryphe/Sonnet3.5-Charcard-Roleplay", "dataset:Gryphe/ChatGPT-4o-Writing-Prompts", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:nothingiisreal/Reddit-Dirty-And-WritingPrompts", "dataset:allura-org/Celeste-1.x-data-mixture", "base_model:Qwen/Qwen2.5-72B", "base_model:finetune:Qwen/Qwen2.5-72B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T03:42:15Z
--- library_name: transformers license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE base_model: Qwen/Qwen2.5-72B datasets: - anthracite-org/kalo-opus-instruct-22k-no-refusal - Nopm/Opus_WritingStruct - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned - Gryphe/Sonnet3.5-Charcard-Roleplay - Gryphe/ChatGPT-4o-Writing-Prompts - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - nothingiisreal/Reddit-Dirty-And-WritingPrompts - allura-org/Celeste-1.x-data-mixture tags: - generated_from_trainer model-index: - name: EVA-Qwen2.5-72B-SFFT-v0.0 results: [] --- # EVA Qwen2.5-72B v0.0 <p> A RP/storywriting specialist model, full-parameter finetune of Qwen2.5-72B on a mixture of synthetic and natural data.<br> It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity and "flavor" of the resulting model.<br> </p> <p>Model is available for inference on <a href=https://featherless.ai/models/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0>Featherless.AI</a></p> <p>Note: using a quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough, so using f16 for it shouldn't be problematic.</p> <p>Note #2: due to some unexpected effects of data normalization, artifacting in the form of randomly appearing sequences of <code>—</code> can sometimes appear in outputs if penalties are too high. To avoid it, ban token number <code>158</code>. Thanks to Cahvay/ALK for discovering this fix!</p> <p> <p>Prompt format is ChatML.</p><br> <h3>Recommended sampler values:</h3> <ul> <li>Temperature: 1</li> <li>Typical-P: 0.9</li> <li>Min-P: 0.05</li> <li>Top-A: 0.2</li> <li>Repetition Penalty: 1.03</li> </ul> <h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3> - [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json) - [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json) </p> <p> <br> <h3> Training data: </h3> <ul> <li>Celeste 70B 0.1 data mixture minus Opus Instruct subset.
See that model's <a href=https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16>card</a> for details.</li> <li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li> <li>A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe</li> <li>A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe</li> <li>Synthstruct and SynthRP datasets by Epiculous</li> </ul> <h3> Training time and hardware: </h3> <ul><li>12 hours on 8xMI300X</li></ul><br> </p> <p>Model was trained by Kearm and Auri.</p> <h4>Special thanks:</h4><ul> <li>to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data</li> <li>to CalamitiousFelicitousness for providing free inference for public beta testing</li> <li>and to Allura-org for support and feedback on EVA models.</li></ul> <a href=https://github.com/axolotl-ai-cloud/axolotl><img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/></a> <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml base_model: Qwen/Qwen2.5-72B load_in_8bit: false load_in_4bit: false strict: false plugins: - axolotl.integrations.liger.LigerPlugin liger_rope: true liger_rms_norm: true liger_swiglu: false liger_fused_linear_cross_entropy: false # plugins: # - axolotl.integrations.spectrum.SpectrumPlugin # spectrum_top_fraction: 0.5 # # Optional if using a pre-scanned model as your base_model. Useful if using a model mirror # spectrum_model_name: Qwen/Qwen2.5-32B datasets: - path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl type: sharegpt - path: datasets/opus-instruct-22k-no_refusals-filtered.jsonl type: sharegpt - path: datasets/Celeste_Filtered.jsonl type: sharegpt - path: datasets/Gryphe-S3-5-Charcards-names-2k.jsonl type: sharegpt - path: datasets/deduped_SynthRP-Gens_processed_09-25-2024-ShareGPT_converted_cleaned.jsonl type: sharegpt - path: datasets/deduped_Gryphe-4o-WP-1k.jsonl type: sharegpt - path: datasets/deduped_not_samantha_norefusals.jsonl type: sharegpt chat_template: chatml shuffle_merged_datasets: true val_set_size: 0.001 output_dir: ./EVA-Qwen2.5-72B-SFFT-v0.0 sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true # adapter: qlora # lora_model_dir: # lora_r: 64 # lora_alpha: 128 # lora_dropout: 0.05 # lora_target_linear: true # peft_use_dora: true unfrozen_parameters: - ^lm_head.weight$ - ^model.embed_tokens.weight$ # mlp.down_proj layers - model.layers.62.mlp.down_proj - model.layers.64.mlp.down_proj - model.layers.63.mlp.down_proj - model.layers.66.mlp.down_proj - model.layers.65.mlp.down_proj - model.layers.67.mlp.down_proj - model.layers.68.mlp.down_proj - model.layers.31.mlp.down_proj - model.layers.60.mlp.down_proj - model.layers.69.mlp.down_proj - model.layers.61.mlp.down_proj - model.layers.59.mlp.down_proj - model.layers.30.mlp.down_proj - model.layers.70.mlp.down_proj - model.layers.32.mlp.down_proj - model.layers.34.mlp.down_proj - model.layers.33.mlp.down_proj - model.layers.76.mlp.down_proj - model.layers.72.mlp.down_proj - model.layers.71.mlp.down_proj - model.layers.58.mlp.down_proj - model.layers.75.mlp.down_proj - model.layers.29.mlp.down_proj - model.layers.56.mlp.down_proj - model.layers.26.mlp.down_proj - model.layers.35.mlp.down_proj - model.layers.28.mlp.down_proj - model.layers.57.mlp.down_proj - model.layers.77.mlp.down_proj - model.layers.36.mlp.down_proj - model.layers.27.mlp.down_proj - model.layers.25.mlp.down_proj - 
model.layers.78.mlp.down_proj - model.layers.37.mlp.down_proj - model.layers.73.mlp.down_proj - model.layers.55.mlp.down_proj - model.layers.54.mlp.down_proj - model.layers.74.mlp.down_proj - model.layers.24.mlp.down_proj - model.layers.53.mlp.down_proj # mlp.gate_proj layers - model.layers.78.mlp.gate_proj - model.layers.77.mlp.gate_proj - model.layers.76.mlp.gate_proj - model.layers.79.mlp.gate_proj - model.layers.75.mlp.gate_proj - model.layers.74.mlp.gate_proj - model.layers.73.mlp.gate_proj - model.layers.72.mlp.gate_proj - model.layers.71.mlp.gate_proj - model.layers.70.mlp.gate_proj - model.layers.69.mlp.gate_proj - model.layers.57.mlp.gate_proj - model.layers.54.mlp.gate_proj - model.layers.55.mlp.gate_proj - model.layers.68.mlp.gate_proj - model.layers.63.mlp.gate_proj - model.layers.53.mlp.gate_proj - model.layers.44.mlp.gate_proj - model.layers.45.mlp.gate_proj - model.layers.49.mlp.gate_proj - model.layers.58.mlp.gate_proj - model.layers.46.mlp.gate_proj - model.layers.56.mlp.gate_proj - model.layers.67.mlp.gate_proj - model.layers.62.mlp.gate_proj - model.layers.50.mlp.gate_proj - model.layers.64.mlp.gate_proj - model.layers.52.mlp.gate_proj - model.layers.40.mlp.gate_proj - model.layers.43.mlp.gate_proj - model.layers.48.mlp.gate_proj - model.layers.66.mlp.gate_proj - model.layers.47.mlp.gate_proj - model.layers.59.mlp.gate_proj - model.layers.65.mlp.gate_proj - model.layers.61.mlp.gate_proj - model.layers.60.mlp.gate_proj - model.layers.42.mlp.gate_proj - model.layers.51.mlp.gate_proj - model.layers.41.mlp.gate_proj # mlp.up_proj layers - model.layers.70.mlp.up_proj - model.layers.69.mlp.up_proj - model.layers.71.mlp.up_proj - model.layers.68.mlp.up_proj - model.layers.72.mlp.up_proj - model.layers.67.mlp.up_proj - model.layers.66.mlp.up_proj - model.layers.73.mlp.up_proj - model.layers.46.mlp.up_proj - model.layers.63.mlp.up_proj - model.layers.75.mlp.up_proj - model.layers.76.mlp.up_proj - model.layers.74.mlp.up_proj - model.layers.45.mlp.up_proj - model.layers.62.mlp.up_proj - model.layers.64.mlp.up_proj - model.layers.65.mlp.up_proj - model.layers.44.mlp.up_proj - model.layers.53.mlp.up_proj - model.layers.47.mlp.up_proj - model.layers.49.mlp.up_proj - model.layers.48.mlp.up_proj - model.layers.57.mlp.up_proj - model.layers.43.mlp.up_proj - model.layers.42.mlp.up_proj - model.layers.56.mlp.up_proj - model.layers.61.mlp.up_proj - model.layers.54.mlp.up_proj - model.layers.40.mlp.up_proj - model.layers.55.mlp.up_proj - model.layers.77.mlp.up_proj - model.layers.60.mlp.up_proj - model.layers.41.mlp.up_proj - model.layers.35.mlp.up_proj - model.layers.37.mlp.up_proj - model.layers.58.mlp.up_proj - model.layers.34.mlp.up_proj - model.layers.38.mlp.up_proj - model.layers.33.mlp.up_proj - model.layers.39.mlp.up_proj # self_attn.k_proj layers - model.layers.36.self_attn.k_proj - model.layers.79.self_attn.k_proj - model.layers.35.self_attn.k_proj - model.layers.34.self_attn.k_proj - model.layers.37.self_attn.k_proj - model.layers.33.self_attn.k_proj - model.layers.38.self_attn.k_proj - model.layers.39.self_attn.k_proj - model.layers.74.self_attn.k_proj - model.layers.77.self_attn.k_proj - model.layers.41.self_attn.k_proj - model.layers.69.self_attn.k_proj - model.layers.32.self_attn.k_proj - model.layers.78.self_attn.k_proj - model.layers.30.self_attn.k_proj - model.layers.70.self_attn.k_proj - model.layers.25.self_attn.k_proj - model.layers.42.self_attn.k_proj - model.layers.29.self_attn.k_proj - model.layers.31.self_attn.k_proj - model.layers.68.self_attn.k_proj - 
model.layers.66.self_attn.k_proj - model.layers.22.self_attn.k_proj - model.layers.65.self_attn.k_proj - model.layers.44.self_attn.k_proj - model.layers.40.self_attn.k_proj - model.layers.63.self_attn.k_proj - model.layers.23.self_attn.k_proj - model.layers.28.self_attn.k_proj - model.layers.24.self_attn.k_proj - model.layers.26.self_attn.k_proj - model.layers.67.self_attn.k_proj - model.layers.75.self_attn.k_proj - model.layers.27.self_attn.k_proj - model.layers.57.self_attn.k_proj - model.layers.64.self_attn.k_proj - model.layers.71.self_attn.k_proj - model.layers.61.self_attn.k_proj - model.layers.72.self_attn.k_proj - model.layers.73.self_attn.k_proj # self_attn.o_proj layers - model.layers.69.self_attn.o_proj - model.layers.39.self_attn.o_proj - model.layers.16.self_attn.o_proj - model.layers.14.self_attn.o_proj - model.layers.19.self_attn.o_proj - model.layers.42.self_attn.o_proj - model.layers.12.self_attn.o_proj - model.layers.15.self_attn.o_proj - model.layers.17.self_attn.o_proj - model.layers.38.self_attn.o_proj - model.layers.23.self_attn.o_proj - model.layers.22.self_attn.o_proj - model.layers.13.self_attn.o_proj - model.layers.29.self_attn.o_proj - model.layers.41.self_attn.o_proj - model.layers.44.self_attn.o_proj - model.layers.46.self_attn.o_proj - model.layers.45.self_attn.o_proj - model.layers.43.self_attn.o_proj - model.layers.49.self_attn.o_proj - model.layers.30.self_attn.o_proj - model.layers.26.self_attn.o_proj - model.layers.25.self_attn.o_proj - model.layers.37.self_attn.o_proj - model.layers.47.self_attn.o_proj - model.layers.11.self_attn.o_proj - model.layers.18.self_attn.o_proj - model.layers.28.self_attn.o_proj - model.layers.20.self_attn.o_proj - model.layers.27.self_attn.o_proj - model.layers.53.self_attn.o_proj - model.layers.52.self_attn.o_proj - model.layers.35.self_attn.o_proj - model.layers.71.self_attn.o_proj - model.layers.10.self_attn.o_proj - model.layers.3.self_attn.o_proj - model.layers.21.self_attn.o_proj - model.layers.24.self_attn.o_proj - model.layers.68.self_attn.o_proj - model.layers.48.self_attn.o_proj # self_attn.q_proj layers - model.layers.1.self_attn.q_proj - model.layers.2.self_attn.q_proj - model.layers.3.self_attn.q_proj - model.layers.0.self_attn.q_proj - model.layers.5.self_attn.q_proj - model.layers.4.self_attn.q_proj - model.layers.6.self_attn.q_proj - model.layers.8.self_attn.q_proj - model.layers.7.self_attn.q_proj - model.layers.9.self_attn.q_proj - model.layers.10.self_attn.q_proj - model.layers.68.self_attn.q_proj - model.layers.25.self_attn.q_proj - model.layers.12.self_attn.q_proj - model.layers.54.self_attn.q_proj - model.layers.55.self_attn.q_proj - model.layers.61.self_attn.q_proj - model.layers.18.self_attn.q_proj - model.layers.49.self_attn.q_proj - model.layers.66.self_attn.q_proj - model.layers.72.self_attn.q_proj - model.layers.11.self_attn.q_proj - model.layers.52.self_attn.q_proj - model.layers.64.self_attn.q_proj - model.layers.15.self_attn.q_proj - model.layers.60.self_attn.q_proj - model.layers.50.self_attn.q_proj - model.layers.59.self_attn.q_proj - model.layers.53.self_attn.q_proj - model.layers.48.self_attn.q_proj - model.layers.57.self_attn.q_proj - model.layers.70.self_attn.q_proj - model.layers.17.self_attn.q_proj - model.layers.67.self_attn.q_proj - model.layers.71.self_attn.q_proj - model.layers.62.self_attn.q_proj - model.layers.51.self_attn.q_proj - model.layers.19.self_attn.q_proj - model.layers.58.self_attn.q_proj - model.layers.13.self_attn.q_proj # self_attn.v_proj layers - 
model.layers.23.self_attn.v_proj - model.layers.25.self_attn.v_proj - model.layers.26.self_attn.v_proj - model.layers.27.self_attn.v_proj - model.layers.28.self_attn.v_proj - model.layers.29.self_attn.v_proj - model.layers.30.self_attn.v_proj - model.layers.31.self_attn.v_proj - model.layers.34.self_attn.v_proj - model.layers.35.self_attn.v_proj - model.layers.36.self_attn.v_proj - model.layers.37.self_attn.v_proj - model.layers.38.self_attn.v_proj - model.layers.42.self_attn.v_proj - model.layers.48.self_attn.v_proj - model.layers.57.self_attn.v_proj - model.layers.58.self_attn.v_proj - model.layers.61.self_attn.v_proj - model.layers.63.self_attn.v_proj - model.layers.64.self_attn.v_proj - model.layers.65.self_attn.v_proj - model.layers.66.self_attn.v_proj - model.layers.69.self_attn.v_proj - model.layers.70.self_attn.v_proj - model.layers.74.self_attn.v_proj - model.layers.75.self_attn.v_proj - model.layers.72.self_attn.v_proj - model.layers.39.self_attn.v_proj - model.layers.41.self_attn.v_proj - model.layers.40.self_attn.v_proj - model.layers.33.self_attn.v_proj - model.layers.59.self_attn.v_proj - model.layers.16.self_attn.v_proj - model.layers.15.self_attn.v_proj - model.layers.76.self_attn.v_proj - model.layers.24.self_attn.v_proj - model.layers.68.self_attn.v_proj - model.layers.67.self_attn.v_proj - model.layers.55.self_attn.v_proj - model.layers.44.self_attn.v_proj wandb_project: EVA-Qwen2.5-72B-SFFT-v0.0 wandb_entity: wandb_watch: wandb_name: Unit-00 wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 4 num_epochs: 3 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.00005 max_grad_norm: 3 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: "unsloth" # gradient_checkpointing_kwargs: # use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 20 evals_per_epoch: 4 saves_per_epoch: 2 save_total_limit: 1 save_safetensors: true hub_model_id: hub_strategy: debug: deepspeed: deepspeed_configs/zero3_bf16.json weight_decay: 0.1 # fsdp: # - full_shard # - auto_wrap # fsdp_config: # fsdp_limit_all_gathers: true # fsdp_sync_module_states: false # fsdp_offload_params: true # fsdp_cpu_ram_efficient_loading: true # fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP # fsdp_transformer_layer_cls_to_wrap: Qwen2DecoderLayer # fsdp_activation_checkpointing: true # fsdp_state_dict_type: SHARDED_STATE_DICT # Changed from FULL_STATE_DICT # fsdp_sharding_strategy: FULL_SHARD # fsdp_forward_prefetch: false # Added # fsdp_backward_prefetch: "BACKWARD_PRE" # Added # fsdp_backward_prefetch_limit: 1 # Added # fsdp_mixed_precision: BF16 # Added ``` </details><br>
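The card above recommends ChatML prompting and specific sampler values but shows no inference snippet. The following is a hedged Transformers sketch that applies the chat template and the subset of the recommended samplers that `generate` supports directly (Top-A has no built-in equivalent, and `min_p` requires a recent transformers release); it is an illustration, not code from the model's authors.

```python
# Hedged sketch: ChatML prompting via the tokenizer's chat template, with the card's
# recommended samplers where transformers supports them directly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write the opening scene of a noir story set on a space station."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(
    input_ids,
    max_new_tokens=400,
    do_sample=True,
    temperature=1.0,
    min_p=0.05,
    typical_p=0.9,
    repetition_penalty=1.03,
    # per the card, ban token 158 if stray em-dash artifacts show up:
    # bad_words_ids=[[158]],
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```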
Dishant1/videomae-base-finetuned-ucf101-subset
Dishant1
2024-10-27T18:37:06Z
61
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-10-22T10:46:28Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6321 - Accuracy: 0.7642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 132 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.0105 | 0.25 | 33 | 0.8727 | 0.75 | | 0.6753 | 1.25 | 66 | 0.5264 | 0.9062 | | 0.3198 | 2.25 | 99 | 0.4118 | 0.875 | | 0.2581 | 3.25 | 132 | 0.3522 | 0.9375 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.20.1
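The card gives training details but no inference example. Below is a hedged sketch of the standard VideoMAE video-classification pattern from the Transformers documentation, applied to this checkpoint; the random frames are placeholders for 16 real RGB frames sampled from a clip.

```python
# Hedged sketch: inference with the fine-tuned VideoMAE checkpoint. The random frames
# stand in for 16 real RGB frames sampled from a video (e.g. decoded with decord).
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "Dishant1/videomae-base-finetuned-ucf101-subset"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```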
JsteReubsSoftware/en-af-sql-training-1727527893
JsteReubsSoftware
2024-10-27T18:31:31Z
122
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "af", "en", "dataset:b-mc2/sql-create-context", "dataset:Clinton/Text-to-sql-v1", "dataset:knowrohit07/know_sql", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-28T19:28:49Z
--- base_model: t5-small library_name: transformers license: apache-2.0 tags: - generated_from_trainer model-index: - name: en-af-sql-training-1727527893 results: [] datasets: - b-mc2/sql-create-context - Clinton/Text-to-sql-v1 - knowrohit07/know_sql language: - af - en pipeline_tag: text2text-generation metrics: - Exact Match - TSED (Tree Similarity of Editing Distance) - SQAM (SQL Query Analysis Metric) - BLEU score --- # en-af-sql-training-1727527893 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on three datasets: b-mc2/sql-create-context, Clinton/Text-to-sql-v1, knowrohit07/know-sql. It achieves the following results on the evaluation set: - Loss: 0.0210 ## Model description This is a fine-tuned Afrikaans-to-SQL model. The pretrained [t5-small](https://huggingface.co/t5-small) was used to train our SQL model. ## Training and Evaluation Datasets As mentioned, to train the model we used a combination of three datasets, which we split into training, testing, and validation sets. The datasets can be found by following these links: - [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) - [Clinton/Text-to-sql-v1](https://huggingface.co/datasets/Clinton/Text-to-sql-v1) - [knowrohit07/know-sql](https://huggingface.co/datasets/knowrohit07/know_sql) We did an 80-10-10 split on each dataset and then combined them into a single `DatasetDict` object with `train`, `test`, and `validation` sets. ```json DatasetDict({ train: Dataset({ features: ['answer', 'question', 'context', 'afr question'], num_rows: 118692 }) test: Dataset({ features: ['answer', 'question', 'context', 'afr question'], num_rows: 14838 }) validation: Dataset({ features: ['answer', 'question', 'context', 'afr question'], num_rows: 14838 }) }) ``` The pretrained model was then fine-tuned on the dataset splits. Rather than using only the `question`, the model also takes in the schema context such that it can generate more accurate queries for a given database. *Input prompt* ```python Table context: CREATE TABLE table_55794 ( "Home team" text, "Home team score" text, "Away team" text, "Away team score" text, "Venue" text, "Crowd" real, "Date" text ) Question: Watter tuisspan het'n span mebbourne? Answer: ``` *Expected Output* ```sql SELECT "Home team score" FROM table_55794 WHERE "Away team" = 'melbourne' ``` ## Intended uses & limitations This model takes in a single prompt (similar to the one above) that is tokenized, and it then uses the `input_ids` to generate an output SQL query. However, the prompt must be structured in a specific way. The `prompt` must start with the table/schema description, followed by the question, followed by an empty answer. Below we illustrate an example of how to use it. Furthermore, our combined dataset looks as follows: *Tokenized Dataset* ```json DatasetDict({ train: Dataset({ features: ['input_ids', 'labels'], num_rows: 118692 }) test: Dataset({ features: ['input_ids', 'labels'], num_rows: 14838 }) validation: Dataset({ features: ['input_ids', 'labels'], num_rows: 14838 }) }) ``` *Usage* ```python import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Trainer, TrainingArguments # Load the model and tokenizer from Hugging Face Hub repo_name = "JsteReubsSoftware/en-af-sql-training-1727527893" en_af_sql_model = AutoModelForSeq2SeqLM.from_pretrained(repo_name, torch_dtype=torch.bfloat16) en_af_sql_model = en_af_sql_model.to('cuda') tokenizer = AutoTokenizer.from_pretrained(repo_name) question = "Watter tuisspan het'n span mebbourne?" 
context = "CREATE TABLE table_55794 ( "Home team" text, "Home team score" text, "Away team" text, "Away team score" text, "Venue" text, "Crowd" real, "Date" text )" prompt = f"""Tables: {context} Question: {question} Answer: """ inputs = tokenizer(prompt, return_tensors='pt') inputs = inputs.to('cuda') output = tokenizer.decode( en_af_sql_model.generate( inputs["input_ids"], max_new_tokens=200, )[0], skip_special_tokens=True ) print("Predicted SQL Query:") print(output) ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 We used the following in our program: ```python output_dir = f'./en-af-sql-training-{str(int(time.time()))}' training_args = TrainingArguments( output_dir=output_dir, learning_rate=5e-3, num_train_epochs=2, per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=16, # batch size for evaluation weight_decay=0.01, logging_steps=50, evaluation_strategy='steps', # evaluation strategy to adopt during training eval_steps=500, # number of steps between evaluation ) trainer = Trainer( model=finetuned_model, args=training_args, train_dataset=tokenized_datasets['train'], eval_dataset=tokenized_datasets['validation'], ) ``` ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.0573 | 0.1348 | 500 | 0.0452 | | 0.0424 | 0.2695 | 1000 | 0.0364 | | 0.037 | 0.4043 | 1500 | 0.0323 | | 0.0356 | 0.5391 | 2000 | 0.0287 | | 0.0328 | 0.6739 | 2500 | 0.0269 | | 0.0281 | 0.8086 | 3000 | 0.0255 | | 0.0286 | 0.9434 | 3500 | 0.0238 | | 0.0269 | 1.0782 | 4000 | 0.0233 | | 0.0247 | 1.2129 | 4500 | 0.0225 | | 0.0245 | 1.3477 | 5000 | 0.0217 | | 0.0226 | 1.4825 | 5500 | 0.0214 | | 0.0245 | 1.6173 | 6000 | 0.0211 | | 0.024 | 1.7520 | 6500 | 0.0210 | | 0.0249 | 1.8868 | 7000 | 0.0210 | ### Testing results After our model was trained and validated, we evaluated the model using four evaluation metrics. - *Exact Match Accuracy:* This measured the accuracy of our model predicting the exact same SQL query as the target query. - *TSED score:* This metric ranges from 0 to 1 and was proposed by [this](https://dl.acm.org/doi/abs/10.1145/3639477.3639732) paper. It allows us to estimate the execution performance of the output query, allowing us to estimate the model's execution accuracy. - *SQAM accuracy:* Similar to TSED, we can used this to estimate the output query's execution accuracy (also see [this](https://dl.acm.org/doi/abs/10.1145/3639477.3639732) paper). - *BLEU score:* This helps us measure the similarity between the output query and the target query. The following were the obtained results over the testing set (14838 records): - Exact Match = 35.98 % - TSED score: 0.897 - SQAM score: 74.31 % - BLEU score: 0.762 ### Citing this work: ```json @misc{jstereubssoftware_2024_Afr2SQL, title = {en-af-sql fine-tuned model}, author = {JsteReubsSoftware}, year = {2024}, url = {https://huggingface.co/JsteReubsSoftware/en-af-sql-training-1727527893} } ``` ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
allknowingroger/Qwen-modelstock2-15B
allknowingroger
2024-10-27T18:27:32Z
7
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:allknowingroger/Qwen-modelstock-15B", "base_model:merge:allknowingroger/Qwen-modelstock-15B", "base_model:allknowingroger/Qwen2.5-slerp-14B", "base_model:merge:allknowingroger/Qwen2.5-slerp-14B", "base_model:allknowingroger/Qwenslerp2-14B", "base_model:merge:allknowingroger/Qwenslerp2-14B", "base_model:allknowingroger/Qwenslerp3-14B", "base_model:merge:allknowingroger/Qwenslerp3-14B", "base_model:rombodawg/Rombos-LLM-V2.6-Qwen-14b", "base_model:merge:rombodawg/Rombos-LLM-V2.6-Qwen-14b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T18:18:36Z
--- base_model: - allknowingroger/Qwenslerp2-14B - rombodawg/Rombos-LLM-V2.6-Qwen-14b - allknowingroger/Qwenslerp3-14B - allknowingroger/Qwen2.5-slerp-14B - allknowingroger/Qwen-modelstock-15B library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [allknowingroger/Qwenslerp2-14B](https://huggingface.co/allknowingroger/Qwenslerp2-14B) as a base. ### Models Merged The following models were included in the merge: * [rombodawg/Rombos-LLM-V2.6-Qwen-14b](https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b) * [allknowingroger/Qwenslerp3-14B](https://huggingface.co/allknowingroger/Qwenslerp3-14B) * [allknowingroger/Qwen2.5-slerp-14B](https://huggingface.co/allknowingroger/Qwen2.5-slerp-14B) * [allknowingroger/Qwen-modelstock-15B](https://huggingface.co/allknowingroger/Qwen-modelstock-15B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: allknowingroger/Qwen-modelstock-15B - model: allknowingroger/Qwenslerp3-14B - model: allknowingroger/Qwen2.5-slerp-14B - model: rombodawg/Rombos-LLM-V2.6-Qwen-14b merge_method: model_stock base_model: allknowingroger/Qwenslerp2-14B normalize: false int8_mask: true dtype: bfloat16 ```
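The card reproduces the exact mergekit YAML used for the merge. Reproducing it locally should amount to one invocation of mergekit's documented `mergekit-yaml` entry point; the sketch below wraps that call in Python so the snippet is self-contained, and the config filename and output directory are placeholders.

```python
# Hedged sketch: re-run the Model Stock merge with mergekit (pip install mergekit),
# assuming its documented `mergekit-yaml` CLI. Save the YAML from the card above as
# merge-config.yaml first; the output path is a placeholder.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./Qwen-modelstock2-15B-merged"],
    check=True,
)
```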
Insait-Robotics/ReVLA-Bridge
Insait-Robotics
2024-10-27T18:25:43Z
19
0
transformers
[ "transformers", "safetensors", "openvla", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2024-10-27T18:07:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
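This card is an empty template, but the repository tags (openvla, custom_code, feature-extraction) suggest an OpenVLA-style robot policy loaded with remote code. The sketch below is heavily hedged: it assumes the repo follows the upstream OpenVLA interface, so the `predict_action` method and the `bridge_orig` un-normalization key are assumptions to verify against the repository's own code.

```python
# Heavily hedged sketch: assumes this repo follows the upstream OpenVLA interface
# (AutoProcessor + AutoModelForVision2Seq with a custom predict_action method and a
# Bridge unnorm_key). Verify against the repository's code before relying on it.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

repo = "Insait-Robotics/ReVLA-Bridge"
processor = AutoProcessor.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    repo, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

image = Image.open("observation.png")  # placeholder path to the current camera frame
prompt = "In: What action should the robot take to pick up the cup?\nOut:"
inputs = processor(prompt, image).to("cuda", dtype=torch.bfloat16)
action = model.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)  # assumed API
print(action)
```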
Sombit/ReVLA_flip_bridge
Sombit
2024-10-27T18:25:43Z
19
0
transformers
[ "transformers", "safetensors", "openvla", "feature-extraction", "custom_code", "arxiv:1910.09700", "region:us" ]
feature-extraction
2024-10-27T18:07:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
md-nishat-008/Mojo-Coder-it-m
md-nishat-008
2024-10-27T17:53:20Z
6
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "dataset:md-nishat-008/Mojo-Corpus", "dataset:md-nishat-008/Mojo-SFT", "dataset:md-nishat-008/Mojo-mSFT", "arxiv:2410.17736", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-26T20:26:51Z
--- license: mit library_name: transformers datasets: - md-nishat-008/Mojo-Corpus - md-nishat-008/Mojo-SFT - md-nishat-008/Mojo-mSFT pipeline_tag: text-generation --- <div align="center"> <h1>🔥 Mojo-Coder 🔥</h1> <em>State-of-the-art Language Model for Mojo Programming</em> </div> <div align="center"> <table><tr> <td><a href="https://arxiv.org/abs/2410.17736"><img src="https://img.shields.io/badge/arXiv-Read_Paper-blue?style=for-the-badge&logo=arxiv" /></a></td> <td><a href="mailto:[email protected]"><img src="https://img.shields.io/badge/Email-Contact_Us-blue?style=for-the-badge&logo=gmail" /></a></td> </tr></table> </div> <div align="center"> <h2>🎯 Background and Motivation</h2> </div> Mojo programming language, developed by Modular, has emerged as a game-changing technology in high-performance computing and AI development. Despite its growing popularity and impressive capabilities (up to 68,000x faster than Python!), existing LLMs struggle with Mojo code generation. Mojo-Coder addresses this gap by providing specialized support for Mojo programming, built upon the robust architecture of [CodeGemma-7B-IT](https://huggingface.co/google/codegemma-7b-it/). <div align="center"> <h2>🤖 Model Information</h2> </div> Mojo-Coder transforms natural language instructions into optimized Mojo code, supporting multiple languages (English, German, French, Spanish, and Bangla) while maintaining high-quality code generation capabilities. <div align="center"> <h2>📝 Description</h2> </div> The Mojo-Coder family consists of three specialized 7B-parameter models, each built on CodeGemma's architecture: | | <h3><a href="https://huggingface.co/md-nishat-008/mojo-coder" style="color: #0969DA;">mojo-coder</a> 🔥</h3> | <h3><a href="https://huggingface.co/md-nishat-008/mojo-coder-it" style="color: #0969DA;">mojo-coder-it</a> 🎆</h3> | <h3><a href="https://huggingface.co/md-nishat-008/mojo-coder-it-m" style="color: #0969DA;">mojo-coder-it-m</a> ⭐</h3> | |---------------------------|:---:|:---:|:---:| | 🔄 Code Completion | ✅ | ✅ | ✅ | | 💡 NL → Code Generation | | ✅ | ✅ | | 🌏 Multilingual Support | | | ✅ | | 📝 Instruction Following | | ✅ | ✅ | <div align="center"> <h2>🚀 Sample Usage</h2> </div> Choose the model that best fits your needs: - For basic Mojo code completion: [mojo-coder](https://huggingface.co/md-nishat-008/mojo-coder) - For English instruction-based code generation: [mojo-coder-it](https://huggingface.co/md-nishat-008/mojo-coder-it) - For multilingual support: [mojo-coder-it-m](https://huggingface.co/md-nishat-008/mojo-coder-it-m) Notably, our models significantly outperform current state-of-the-art models including GPT-4o and Claude-3.5-Sonnet on the HumanEval-Mojo benchmark. <div style="color: red; text-align: center; padding: 10px; margin: 20px 0; border: 2px solid red; border-radius: 5px;"> <strong>⚠️ IMPORTANT: When using the model, you MUST explicitly mention "Mojo" in your prompts (e.g., "Write a Mojo function to...", "Create Mojo code that...") otherwise the model may not generate Mojo code!</strong> </div> #### For Code Generation ```python from transformers import GemmaTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("md-nishat-008/Mojo-Coder-it") model = AutoModelForCausalLM.from_pretrained("md-nishat-008/Mojo-Coder-it") input_text = "Write me a Mojo function to calculate the nth fibonacci number." 
input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Chat Template The instruction-tuned models use a chat template that must be adhered to for conversational use. The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet. Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction: ```py from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("md-nishat-008/Mojo-Coder-it") model = AutoModelForCausalLM.from_pretrained("md-nishat-008/Mojo-Coder-it") chat = [{"role": "user", "content": "Write a function that calculates factorial of a number in Mojo"}] inputs = tokenizer.apply_chat_template(chat, tokenize=True, return_tensors="pt").to("cuda") with torch.no_grad(): outputs = model.generate( inputs=inputs, max_new_tokens=1000, temperature=0.7, top_p=0.95, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` At this point, the prompt contains the following text: ``` <bos><start_of_turn>user Write a hello world program in Mojo<end_of_turn> <start_of_turn>model ``` As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with the `<end_of_turn>` token. You can follow this format to build the prompt manually, if you need to do it without the tokenizer's chat template. After the prompt is ready, generation can be performed like this: ```py inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150) ``` <div align="center"> <h2>⚙️ Inputs and Outputs</h2> </div> **Inputs**: - For base model (mojo-coder): code prefix and/or suffix for Mojo code completion - For instruction-tuned models (mojo-coder-it & mojo-coder-it-m): natural language prompts/instructions <p style="color: red;"><strong>Note: In prompts, you must explicitly mention "Mojo" (e.g., "Write a Mojo function to...", "Write Mojo code to...") otherwise the models may not generate Mojo code.</strong></p> **Outputs**: - For all variants: Mojo code snippets and natural language responses - Additional explanations and documentation when requested <div align="center"> <h2>📚 Model Data</h2> </div> ### Training Dataset Using [CodeGemma-7B-IT](https://huggingface.co/google/codegemma-7b-it/) as our base model, we further trained on: - [Mojo-Corpus](https://huggingface.co/datasets/md-nishat-008/Mojo_Corpus): 6.5M tokens of curated Mojo code from public repositories - [Mojo-SFT](https://huggingface.co/datasets/md-nishat-008/Mojo_SFT): 3,200 instruction-code pairs for English - [Mojo-mSFT](https://huggingface.co/datasets/md-nishat-008/Mojo_mSFT): Multilingual instruction-code pairs in 5 languages ### Training Data Processing The following data pre-processing techniques were applied: - Rigorous filtering pipeline (F1-F6) to ensure code quality - Apache 2.0 license compliance - Language detection using fastText - Duplicate removal and content validation - Expert review for instruction-code pairs <div align="center"> <h2>📊 Evaluation Information</h2> </div> ### Evaluation Approach We evaluate Mojo-Coder on: - 
- [HumanEval-Mojo](https://huggingface.co/datasets/md-nishat-008/HumanEval-Mojo): First benchmark for Mojo code generation
- Multi-language instruction following
- Code quality and execution success

### Evaluation Results

#### Code Generation Benchmarks (Pass@1)

| Model | HumanEval-Mojo |
|-------|----------------|
| GPT-4o | 25.5% |
| Claude-3.5-Sonnet | 39.8% |
| mojo-coder | 36.7% |
| mojo-coder-it-m | 61.5% |
| mojo-coder-it | 66.4% |

<div align="center">
  <h2>⚠️ Limitations and Usage</h2>
</div>

### Intended Usage

- Mojo code completion and generation
- Multi-language instruction following
- Code documentation and explanation
- Educational support for Mojo programming

### Known Limitations

- Limited to Mojo programming language
- Requires explicit mention of "Mojo" in prompts
- Performance may vary with complex algorithms
- May occasionally generate Python-like syntax
- Based on data available up to 2024

### Ethical Considerations

The model is designed for:

- Educational and development purposes
- Open-source contribution to Mojo ecosystem
- Supporting multilingual access to Mojo programming

Code should be reviewed and tested before production use, especially for performance-critical applications.

<div align="center">
  <h2>📚 Citation</h2>
</div>

If you find our work helpful, please consider citing our paper:

<div style="background-color: #f6f8fa; padding: 20px; border-radius: 5px; margin: 10px 0;">
<p style="margin-bottom: 10px;"><strong>MojoBench: Language Modeling and Benchmarks for Mojo</strong></p>

```bibtex
@inproceedings{Raihan2024MojoBenchLM,
    title  = {MojoBench: Language Modeling and Benchmarks for Mojo},
    author = {Raihan, Nishat and Santos, Joanna C. S. and Zampieri, Marcos},
    year   = {2024},
    url    = {https://api.semanticscholar.org/CorpusID:273532552}
}
```
</div>
RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf
RichardErkhov
2024-10-27T17:49:35Z
1,093
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-10-27T17:22:52Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0 - GGUF - Model creator: https://huggingface.co/Mlxa/ - Original model: https://huggingface.co/Mlxa/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q2_K.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q2_K.gguf) | Q2_K | 0.52GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K_S.gguf) | Q3_K_S | 0.6GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K.gguf) | Q3_K | 0.66GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K_M.gguf) | Q3_K_M | 0.66GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q3_K_L.gguf) | Q3_K_L | 0.69GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.IQ4_XS.gguf) | IQ4_XS | 0.7GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_0.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_0.gguf) | Q4_0 | 0.72GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.IQ4_NL.gguf) | IQ4_NL | 0.73GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K_S.gguf) | Q4_K_S | 0.76GB | | 
[deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K.gguf) | Q4_K | 0.81GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K_M.gguf) | Q4_K_M | 0.81GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_1.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_1.gguf) | Q4_1 | 0.8GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_0.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_0.gguf) | Q5_0 | 0.87GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_K_S.gguf) | Q5_K_S | 0.89GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_K.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_K.gguf) | Q5_K | 0.93GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_K_M.gguf) | Q5_K_M | 0.93GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_1.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q5_1.gguf) | Q5_1 | 0.95GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q6_K.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q6_K.gguf) | Q6_K | 1.09GB | | [deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q8_0.gguf](https://huggingface.co/RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf/blob/main/deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q8_0.gguf) | Q8_0 | 1.33GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. 
- **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
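Going back to the GGUF files listed at the top of this card: none of them ship with a loading snippet, so here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (both assumed installed; the Q4_K_M file is picked arbitrarily from the table and any other quant can be substituted):

```python
# Minimal sketch: download one of the GGUF quants listed above and run it locally.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Mlxa_-_deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0-gguf",
    filename="deepseek-coder-1.3B-kexer_num_epochs-4_max_lr-1e-05_neftune_alpha-0.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)

# The base model is a Kotlin-exercise (kexer) fine-tune of deepseek-coder, so a Kotlin prompt is a natural test.
completion = llm("// Kotlin function that reverses a string\nfun reverse(", max_tokens=128)
print(completion["choices"][0]["text"])
```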
olabs-ai/qLeap_v04_instruct
olabs-ai
2024-10-27T17:37:53Z
9
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-1B-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-27T17:35:23Z
--- base_model: unsloth/Llama-3.2-1B-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** olabs-ai - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO
yuvraj17
2024-10-27T17:28:20Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dpo", "rlhf", "trl", "conversational", "en", "arxiv:2305.18290", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T23:04:29Z
--- language: - en license: apache-2.0 library_name: transformers tags: - dpo - rlhf - trl pipeline_tag: text-generation model-index: - name: Llama3-8B-SuperNova-Spectrum-Hermes-DPO results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 46.91 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 21.24 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 5.14 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 6.94 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 9.62 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 18.16 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO name: Open LLM Leaderboard --- # Llama3-8B-SuperNova-Spectrum-Hermes-DPO This model is a **DPO fine-tuned** version of my `DARE_TIES` merged Model [`yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties`](https://huggingface.co/yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties) on the [yuvraj17/chatml-OpenHermes2.5-dpo-binarized-alpha-2k](https://huggingface.co/datasets/yuvraj17/chatml-OpenHermes2.5-dpo-binarized-alpha-2k) dataset. ## DPO (Direct Preference Optimization): Direct Preference Optimization (DPO) is a fine-tuning technique that focuses on aligning a model's responses with human preferences or ranking data without requiring reinforcement learning steps, like in RLHF. <figure> <img src="https://cdn-uploads.huggingface.co/production/uploads/66137d95e8d2cda230ddcea6/kHcU5dkcSVqxEIWt_GRUB.png" width="1000" height="768"> <figcaption> DPO vs RLHF <a href="//arxiv.org/abs/2305.18290">Reference</a> </figcaption> </figure> ## Training: - Trained on **1x A40s (48GB VRAM)** using the [HuggingFace TRL](https://huggingface.co/docs/trl/index). 
- **QLoRA** (`4-bit precision`) for 1 epoch

```python
# LoRA configuration
from peft import LoraConfig

peft_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
```

### Training Params

The following hyperparameters were used during training:
- learning_rate: 5e-05
- beta: 0.1
- num_devices: 1
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1

### Training Time

**1:57:00** hours

### Weights & Biases Report

[Report-Link](https://api.wandb.ai/links/my-sft-team/d211juao)

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "yuvraj17/Llama3-8B-SuperNova-Spectrum-Hermes-DPO"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## 🏆 Evaluation Scores

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yuvraj17__Llama3-8B-SuperNova-Spectrum-Hermes-DPO)

| Metric |Value|
|-------------------|----:|
|Avg. |18.00|
|IFEval (0-Shot) |46.91|
|BBH (3-Shot) |21.24|
|MATH Lvl 5 (4-Shot)| 5.14|
|GPQA (0-shot) | 6.94|
|MuSR (0-shot) | 9.62|
|MMLU-PRO (5-shot) |18.16|
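For readers who want to reproduce the setup, here is a rough sketch of how the LoRA config and hyperparameters above fit together with `trl`'s `DPOTrainer`. This is an assumption-laden outline rather than the exact training script: keyword names differ across trl versions (older releases accept `beta` directly, newer ones expect a `DPOConfig`), and the dataset column layout is assumed to be the usual prompt/chosen/rejected format.

```python
# Rough sketch of the QLoRA + DPO run described above (older trl-style API; adjust for your version).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, BitsAndBytesConfig
from trl import DPOTrainer

base = "yuvraj17/Llama3-8B-SuperNova-Spectrum-dare_ties"
tokenizer = AutoTokenizer.from_pretrained(base)

# QLoRA: load the base model in 4-bit as described in the card.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb_config, device_map="auto")

# DPO preference dataset from the card; expected to expose prompt/chosen/rejected columns.
train_dataset = load_dataset("yuvraj17/chatml-OpenHermes2.5-dpo-binarized-alpha-2k", split="train")

peft_config = LoraConfig(
    r=32, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

args = TrainingArguments(
    output_dir="llama3-8b-supernova-dpo",
    learning_rate=5e-5,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,              # with a peft_config, trl rebuilds the reference from the frozen base
    args=args,
    beta=0.1,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```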
AmberYifan/Qwen2.5-7B-gen-dpo-10k
AmberYifan
2024-10-27T17:22:31Z
5
0
null
[ "safetensors", "qwen2", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2024-10-26T21:47:46Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - generated_from_trainer model-index: - name: Qwen2.5-7B-gen-dpo-10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-7B-gen-dpo-10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.2.2+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
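As a reference point, the hyperparameters listed above translate roughly into the following 🤗 `TrainingArguments`. This is only a sketch of the optimization settings: the card does not say which trainer, dataset, or multi-GPU launch configuration was used, so those parts are left out.

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
# Dataset and trainer wiring are not specified in the card and are omitted here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Qwen2.5-7B-gen-dpo-10k",
    learning_rate=5e-7,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,   # with 4 GPUs: 4 * 4 * 2 = total train batch size 32
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```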
olabs-ai/qLeap_v04
olabs-ai
2024-10-27T17:17:16Z
5
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-1B-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-27T17:14:13Z
--- base_model: unsloth/Llama-3.2-1B-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** olabs-ai - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-1B-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/Gemma2_Magnum_abliterated_27b-GGUF
mradermacher
2024-10-27T17:14:10Z
392
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:SzilviaB/Gemma2_Magnum_abliterated_27b", "base_model:quantized:SzilviaB/Gemma2_Magnum_abliterated_27b", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T12:21:09Z
--- base_model: SzilviaB/Gemma2_Magnum_abliterated_27b language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SzilviaB/Gemma2_Magnum_abliterated_27b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q2_K.gguf) | Q2_K | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q5_K_S.gguf) | Q5_K_S | 19.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q5_K_M.gguf) | Q5_K_M | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q6_K.gguf) | Q6_K | 22.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF
mradermacher
2024-10-27T17:14:10Z
147
3
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:SzilviaB/Gemma2_Magnum_abliterated_27b", "base_model:quantized:SzilviaB/Gemma2_Magnum_abliterated_27b", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T15:56:48Z
--- base_model: SzilviaB/Gemma2_Magnum_abliterated_27b language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SzilviaB/Gemma2_Magnum_abliterated_27b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ1_S.gguf) | i1-IQ1_S | 6.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ1_M.gguf) | i1-IQ1_M | 6.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ2_S.gguf) | i1-IQ2_S | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ2_M.gguf) | i1-IQ2_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q2_K.gguf) | i1-Q2_K | 10.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | | | 
[GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q4_0.gguf) | i1-Q4_0 | 15.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 19.0 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Gemma2_Magnum_abliterated_27b-i1-GGUF/resolve/main/Gemma2_Magnum_abliterated_27b.i1-Q6_K.gguf) | i1-Q6_K | 22.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Yastreb/Hilichurl-Genshin-Impact-Pony
Yastreb
2024-10-27T16:56:51Z
111
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:56:21Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/7c42b52f-61f7-4e7c-9635-bc1636e03282.jpeg base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: HilichurlNSFW, monster, colored skin, mask --- # Hilichurl-Genshin-Impact-Pony <Gallery /> ## Model description Hilichurl monster from "Genshin Impact" Trigger: HilichurlNSFW, monster, colored skin, Mask, Euler A - 20 Steps - Clip Skip 2 - CFG Scale 5 https://civitai.com/models/495517/hilichurl-genshin-impact-pony ## Trigger words You should use `HilichurlNSFW` to trigger the image generation. You should use `monster` to trigger the image generation. You should use `colored skin` to trigger the image generation. You should use `mask` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Hilichurl-Genshin-Impact-Pony/tree/main) them in the Files & versions tab.
AmberYifan/Qwen2.5-7B-dpo-10k
AmberYifan
2024-10-27T16:55:16Z
7
0
null
[ "safetensors", "qwen2", "generated_from_trainer", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "region:us" ]
null
2024-10-26T21:22:29Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-7B tags: - generated_from_trainer model-index: - name: Qwen2.5-7B-dpo-10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-7B-dpo-10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.43.3 - Pytorch 2.2.2+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
Yastreb/Aether-Male-Traveller-Genshin-Impact-PDXL
Yastreb
2024-10-27T16:54:00Z
117
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:53:47Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_anime, aether \(genshin impact\), looking at viewer, male focus, solo, happy, open mouth, orange eyes, genshin impact, blonde hair, long hair, ahoge, hair between eyes, single braid, single earring, jewelry, bangs, shirtless, solo, beach, blue sky, blurry background, blurry foreground, cloud, day, depth of field, jungle, motion blur ocean, outdoors, palm leaf, palm tree, sky, solo, tree <lora:AetherPDXLep21:1> parameters: negative_prompt: source_pony, source_cartoon, trap, shota output: url: images/00030-467494357.png base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: >- aether \(genshin impact\), male focus, solo, blonde hair, yellow eyes, ahoge, bangs, single braid, hair between eyes --- # Aether-Male-Traveller-Genshin-Impact-PDXL <Gallery /> ## Model description This is a SDXL Lora of Aether to be used in Pony Diffusion v6 and AutismMix XL and other similar Pony&#x2F;SDXL based models. After making my Itto Lora, I noticed that there was a Pony Lora for every genshin character. However, for some reason, they seem to have forgotten our favorite Descender of Tevyat, Aether! So here is a model to use :D Use &quot;aether \(genshin impact\), male focus, solo, blonde hair, yellow eyes, ahoge, bangs, single braid, hair between eyes&quot; to make sure you get all the elements of Aether. You can also use &quot;single earring, jewelry, earrings, navel, scarf, cape, gloves&quot; to get his main outfit! (although it usually generates without these anyway) Two things I noticed about this specific version of this Lora is that: outputs tend to be very visually pleasing (at least subjectively) &amp; his hair style seems to kinda look more like wanderer&#39;s like kinda round. I would&#39;ve chosen a different version that had a more accurate hairstyle, but I much preferred the outputs of this version. I also want to make a new version that is able to be able to easily change the elemental resonance, so I&#39;m not that concerned with this one being the end all be all. I also personally do not enjoy shota&#x2F;trap art as much, so my focus with training, image selection, and testing reflects this. I can not guarantee quality if this is your end goal. Enjoy! https:&#x2F;&#x2F;civitai.com&#x2F;models&#x2F;365526&#x2F;aether-male-traveller-genshin-impact-pdxl ## Trigger words You should use `aether \(genshin impact\)` to trigger the image generation. You should use `male focus` to trigger the image generation. You should use `solo` to trigger the image generation. You should use `blonde hair` to trigger the image generation. You should use `yellow eyes` to trigger the image generation. You should use `ahoge` to trigger the image generation. You should use `bangs` to trigger the image generation. You should use `single braid` to trigger the image generation. You should use `hair between eyes` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Aether-Male-Traveller-Genshin-Impact-PDXL/tree/main) them in the Files & versions tab.
midnightGlow/flant5_xlsum_bangla
midnightGlow
2024-10-27T16:52:14Z
116
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "dataset:csebuetnlp/xlsum", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-07-06T16:20:50Z
--- datasets: - csebuetnlp/xlsum metrics: - bertscore - bleu - rouge ---
Yastreb/Hiro-Majalis-Style-Tales-of-Androgyny
Yastreb
2024-10-27T16:51:53Z
115
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:51:16Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- Hiro, standing, solo, enchanter, forest, looking at camer, view from above <lora:Hiro_Tales_of_Androgyny (1):1> output: url: images/00196-1467698345.png base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: Hiro --- # Hiro-Majalis-Style-Tales-of-Androgyny <Gallery /> ## Model description Made to produce images of Hiro; it can also be added to other prompts to make the result look like Hiro. https://civitai.com/models/868659/hiro-majalis-style-tales-of-androgyny ## Trigger words You should use `Hiro` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Hiro-Majalis-Style-Tales-of-Androgyny/tree/main) them in the Files & versions tab.
davidfred/Qwen2.5-32BHeb
davidfred
2024-10-27T16:48:54Z
6
0
null
[ "safetensors", "qwen2", "4-bit", "bitsandbytes", "region:us" ]
null
2024-10-27T16:39:50Z
## Introduction

The fine-tuned model is based on the davidfred/Qwen2.5-32B pre-trained language model. It has been fine-tuned using the provided code (fine.py) to specialize in answering questions related to Israeli law. The model is capable of generating concise and relevant answers in Hebrew while referencing relevant legal cases and legislation.

## Model Details

- Base Model: davidfred/Qwen2.5-32B
- Fine-tuned Model: Qwen2.5-32BHeb
- Training Data: Processed Wikipedia dataset (/home/azureuser/fredlebonexperim002/extracted_text)

### Training Configuration

- 4-bit quantization using BitsAndBytesConfig
- LoRA (Low-Rank Adaptation) with r=16, lora_alpha=32, lora_dropout=0.05
- Training hyperparameters:
  - Batch size: 8 per device
  - Gradient accumulation steps: 4
  - Learning rate: 1e-4
  - Number of epochs: 1
  - Optimizer: AdamW
  - LR scheduler: Cosine with warmup

## How to Use the Model

Install the required dependencies: torch, transformers, datasets, peft, trl.

Load the fine-tuned model and tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "davidfred/Qwen2.5-32BHeb"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

Define the prompt template for asking questions (the guidelines instruct the model to answer only in Hebrew, briefly and directly):

```python
PROMPT_GUIDE = """
הנחיות למענה על השאלה:
1. התשובה חייבת להיות **רק** בעברית.
2. תן תשובה קצרה, ממוקדת וברורה.
3. התייחס ישירות לשאלה שנשאלה.

שאלה: {question}

תשובה:
"""
```

Generate text using the model:

```python
def generate_text(prompt, max_length=1024, temperature=0.7, top_p=0.92, top_k=50):
    instruction = PROMPT_GUIDE.format(question=prompt)
    input_ids = tokenizer(instruction, return_tensors='pt').input_ids

    output_ids = model.generate(
        input_ids=input_ids,
        max_length=max_length,
        num_return_sequences=1,
        do_sample=True,
        top_p=top_p,
        top_k=top_k,
        temperature=temperature,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    generated_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    response = generated_text[len(instruction):].strip()
    return response
```

Ask a question and get the model's response:

```python
question = "מהם התנאים לקבלת אזרחות ישראלית?"  # "What are the conditions for obtaining Israeli citizenship?"
response = generate_text(question)
print(response)
```

The model will generate a concise answer in Hebrew, referencing relevant legal cases and legislation based on the provided question.
Yastreb/Lumine-Genshin-Impact
Yastreb
2024-10-27T16:48:46Z
112
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:48:17Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- score_9, score_8_up, score_7_up, source_anime, genshinlumine, <lora:genshin-lumine-2024-short-ponyxl-lora-nochekaiser:1>, lumine, bangs, blonde hair, hair ornament, hair between eyes, yellow eyes, flower, hair flower, feather hair ornament, dress, bare shoulders, detached sleeves, scarf, white dress, white footwear, cleavage, detached collar, indoors, bed, bed room, on side, blush, drunk, looking at viewer, solo, dutch angle, cowboy shot, parameters: negative_prompt: 3d, output: url: images/genshinlumine-3c449-3180287154.jpeg base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: >- lumine, bangs, blonde hair, hair ornament, hair between eyes, yellow eyes, flower, hair flower, feather hair ornament, dress, bare shoulders, detached sleeves, scarf, white dress, white footwear, cleavage, detached collar --- # Lumine-Genshin-Impact <Gallery /> ## Model description Support me on facebook.com&#x2F;Kaiseir patreon.com&#x2F;Serkai https:&#x2F;&#x2F;ko-fi.com&#x2F;kaiseir Trigger words: Appearance: lumine, bangs, blonde hair, hair ornament, hair between eyes, yellow eyes, flower, hair flower, feather hair ornament, Outfit: dress, bare shoulders, detached sleeves, scarf, white dress, white footwear, cleavage, detached collar, https:&#x2F;&#x2F;civitai.com&#x2F;models&#x2F;355849&#x2F;lumine-genshin-impact ## Trigger words You should use `lumine` to trigger the image generation. You should use `bangs` to trigger the image generation. You should use `blonde hair` to trigger the image generation. You should use `hair ornament` to trigger the image generation. You should use `hair between eyes` to trigger the image generation. You should use `yellow eyes` to trigger the image generation. You should use `flower` to trigger the image generation. You should use `hair flower` to trigger the image generation. You should use `feather hair ornament` to trigger the image generation. You should use `dress` to trigger the image generation. You should use `bare shoulders` to trigger the image generation. You should use `detached sleeves` to trigger the image generation. You should use `scarf` to trigger the image generation. You should use `white dress` to trigger the image generation. You should use `white footwear` to trigger the image generation. You should use `cleavage` to trigger the image generation. You should use `detached collar` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Lumine-Genshin-Impact/tree/main) them in the Files & versions tab.
Yastreb/Citlali-Genshin-Impact-Goofy-Ai
Yastreb
2024-10-27T16:46:11Z
122
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:45:59Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- score_9,score_8_up,score_7_up,<lora:citlali_genshin_impact_pdxl_goofy:1> citlali, 1girl, hair intakes, white background, upper body, large breasts, black shirt, bracelet, open mouth, upper teeth only, detached sleeves, jewelry, simple background, parted bangs, bare shoulders, twin braids, looking at viewer, hand up, sleeveless, ribbed shirt, blue necktie, gradient hair, armlet, black sleeves, blush, clothing cutout, single detached sleeve, bridal gauntlets, navel, :o, hair between eyes, pink ascot, v-shaped eyebrows, stomach cutout, bangle parameters: negative_prompt: realistic,monochrome,greyscale, artist name, signature, watermark, output: url: images/00026-1209148018.jpeg base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: citlali, long hair, facial mark, blue eyes, twin braids --- # Citlali-Genshin-Impact-Goofy-Ai <Gallery /> ## Model description All my models are officially hosted and maintained by me on Tensor.art . use my Exclusive and public model for free on tensor.art Get early access to my upcoming NSFW Lora in my Patreon . Support my work by joining any one of them and get early access to all my upcoming loras and other perks such as fan requests and Discord role. Join my Discord Server check the images for prompts use lora at 0.7-1 Adetailer for faces Img2img upscale 4x-ultra sharp comment you idea or request https:&#x2F;&#x2F;civitai.com&#x2F;models&#x2F;874550&#x2F;citlali-genshin-impact-or-goofy-ai ## Trigger words You should use `citlali` to trigger the image generation. You should use `long hair` to trigger the image generation. You should use `facial mark` to trigger the image generation. You should use `blue eyes` to trigger the image generation. You should use `twin braids` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Citlali-Genshin-Impact-Goofy-Ai/tree/main) them in the Files & versions tab.
Yastreb/Ororon-XL-Genshin-Impact
Yastreb
2024-10-27T16:44:09Z
116
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:43:57Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- 1boy,solo,male focus,ororon,blue hair,heterochromia,blue eyes,pink eyes,animal ears,scarf,hood up,tattoo,hair between eyes,night parameters: negative_prompt: >- lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, JPEG artifacts, signature, watermark, username, blurry, ((artist name)),english text,letters,watermark output: url: images/787170933300331048.png base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: >- ororon, blue hair, heterochromia, blue eyes, pink eyes, animal ears, scarf, hood up, tattoo, hair between eyes --- # Ororon-XL-Genshin-Impact <Gallery /> ## Model description Ororon XL &#x2F; Genshin Impact Trigger words: ororon,blue hair,heterochromia,blue eyes,pink eyes,animal ears,scarf,hood up,tattoo,hair between eyes, I publish the LoRA for personal use and not for commercial or profit-making purposes. please consider making a buzz donation, it helps to create new LoRAs. If you want a LoRA you can check my profile for open commissions or ask in DM https:&#x2F;&#x2F;pixai.art&#x2F;@aki21 https:&#x2F;&#x2F;tensor.art&#x2F;u&#x2F;617140264737342212 https:&#x2F;&#x2F;civitai.com&#x2F;models&#x2F;858920&#x2F;ororon-xl-genshin-impact ## Trigger words You should use `ororon` to trigger the image generation. You should use `blue hair` to trigger the image generation. You should use `heterochromia` to trigger the image generation. You should use `blue eyes` to trigger the image generation. You should use `pink eyes` to trigger the image generation. You should use `animal ears` to trigger the image generation. You should use `scarf` to trigger the image generation. You should use `hood up` to trigger the image generation. You should use `tattoo` to trigger the image generation. You should use `hair between eyes` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Ororon-XL-Genshin-Impact/tree/main) them in the Files & versions tab.
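If you prefer to use the weights from diffusers rather than a UI, a minimal loading sketch follows. It assumes the base checkpoint repo is in diffusers format and that the LoRA safetensors file here is in a layout `load_lora_weights` accepts, neither of which the card states; the prompt reuses the trigger words listed above.

```python
# Minimal sketch: apply this character LoRA on top of the Pony/SDXL base listed above.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/prefect-pony-xl-v3-sdxl", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Yastreb/Ororon-XL-Genshin-Impact")

image = pipe(
    "1boy, solo, male focus, ororon, blue hair, heterochromia, blue eyes, pink eyes, "
    "animal ears, scarf, hood up, tattoo, hair between eyes, night",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality, watermark",
    num_inference_steps=28,
).images[0]
image.save("ororon.png")
```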
Yastreb/chastity-belt-XL-pony
Yastreb
2024-10-27T16:40:01Z
121
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:39:52Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- score_9, score_8_up, score_7_up,source_anime, high res image,masterpiece,best quality,woman,cute face,clear skin,shiny hair,ultra detailed eyes, simple background, dress <lora:chastity belt_Pony_V1.0:1> chastity belt, parameters: negative_prompt: >- score_6, score_5, score_4, ugly face, low res, interlocked fingers, anatomically incorrect hands, bad anatomy, pony, furry, censored,realistic output: url: images/00000-730117590.png base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: chastity belt --- # chastity-belt-XL-pony <Gallery /> ## Model description The recommended LoRA strength is around 1.0. XL: trying to put it into any other outfit breaks the output, so it is not versatile. https://civitai.com/models/510204/chastity-beltxlpony ## Trigger words You should use `chastity belt` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/chastity-belt-XL-pony/tree/main) them in the Files & versions tab.
vmayoral/act_u850_test
vmayoral
2024-10-27T16:31:42Z
6
0
lerobot
[ "lerobot", "safetensors", "act", "model_hub_mixin", "pytorch_model_hub_mixin", "robotics", "region:us" ]
robotics
2024-10-27T16:31:30Z
--- library_name: lerobot tags: - act - model_hub_mixin - pytorch_model_hub_mixin - robotics --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: https://github.com/huggingface/lerobot - Docs: [More Information Needed]
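Since the card only points at the PytorchModelHubMixin integration, here is a minimal loading sketch; the import path is an assumption about the lerobot package layout and may differ between versions:

```python
# Minimal sketch: load the ACT policy checkpoint via the PyTorchModelHubMixin integration.
# The module path below is an assumption about the installed lerobot version; adjust as needed.
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("vmayoral/act_u850_test")
policy.eval()
```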
Yastreb/Flat-Chastity-Cage-Concept-Pony
Yastreb
2024-10-27T16:27:15Z
114
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:John6666/prefect-pony-xl-v3-sdxl", "base_model:adapter:John6666/prefect-pony-xl-v3-sdxl", "region:us" ]
text-to-image
2024-10-27T16:26:54Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: score_9, score_8_up, score_7_up, score_6_up, source_anime, parameters: negative_prompt: >- score_1, score_2, score_3, score_4, signature, monochrome,fat, bbw, chubby, plump, thick, chibi, loli, child, (wide hips, thick thighs, thick_ass, big ass, huge ass, large ass, big ass:2), twitter username, twitter logo, mosaic censoring, censored, bar censor, (underwear:1q.4), male, 1boy, bad hands, (pussy, clitoris, vagina, penis, cock, dick) output: url: images/00338-1568811375.png base_model: John6666/prefect-pony-xl-v3-sdxl instance_prompt: f1atcag3, chastity ring --- # Flat-Chastity-Cage-[Concept]-[Pony] <Gallery /> ## Model description Recommended Weights: 0.6-1.0 Adetailer &amp; Hi-Res Fix Recommended [should also work with males.] Triggerwords: Cage: f1atcag3, chastity ring Chastity belt (is optional): chastity belt Chastity cum: cumdrip, leaking cum &#x2F; precum drip, leaking precum Lock on cage: lock optional: dick, cock, penis in negative prompt. Feel free to provide Feedback and share your gens! ## Trigger words You should use `f1atcag3` to trigger the image generation. You should use `chastity ring` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Yastreb/Flat-Chastity-Cage-Concept-Pony/tree/main) them in the Files & versions tab.
Melvinjj/bert_results
Melvinjj
2024-10-27T16:19:00Z
164
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T16:18:46Z
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: bert_results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_results This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - epoch: 1.0 - eval_accuracy: 0.9426 - eval_loss: 0.1162 - eval_runtime: 12198.6693 - eval_samples_per_second: 61.712 - eval_steps_per_second: 1.929 - step: 47051 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
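No usage snippet is given in the card; a minimal inference sketch with the `transformers` pipeline follows (the label ids/names depend on the unspecified fine-tuning dataset):

```python
# Minimal sketch: run the fine-tuned BERT classifier.
# Label names come from the dataset it was fine-tuned on, which the card does not specify.
from transformers import pipeline

classifier = pipeline("text-classification", model="Melvinjj/bert_results")
print(classifier("This was exactly what I was hoping for."))
```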
Lareb00/model_large_batch-small-emotion-small-emotion-small-emotion-small-emotion-small-emotion
Lareb00
2024-10-27T16:18:53Z
117
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T16:06:34Z
--- library_name: transformers license: mit base_model: lareb00/model_large_batch tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: model_large_batch-small-emotion-small-emotion-small-emotion-small-emotion-small-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_large_batch-small-emotion-small-emotion-small-emotion-small-emotion-small-emotion This model is a fine-tuned version of [lareb00/model_large_batch](https://huggingface.co/lareb00/model_large_batch) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7743 - Accuracy: 0.633 - F1: 0.6097 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:| | No log | 0.9936 | 39 | 0.7968 | 0.6285 | 0.6048 | | No log | 1.9873 | 78 | 0.7787 | 0.631 | 0.6069 | | No log | 2.9809 | 117 | 0.7743 | 0.633 | 0.6097 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf
RichardErkhov
2024-10-27T16:11:49Z
29
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-27T11:11:38Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DarkForest-20B-v3.0 - GGUF - Model creator: https://huggingface.co/TeeZee/ - Original model: https://huggingface.co/TeeZee/DarkForest-20B-v3.0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [DarkForest-20B-v3.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q2_K.gguf) | Q2_K | 6.91GB | | [DarkForest-20B-v3.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q3_K_S.gguf) | Q3_K_S | 8.06GB | | [DarkForest-20B-v3.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q3_K.gguf) | Q3_K | 9.04GB | | [DarkForest-20B-v3.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q3_K_M.gguf) | Q3_K_M | 9.04GB | | [DarkForest-20B-v3.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q3_K_L.gguf) | Q3_K_L | 9.9GB | | [DarkForest-20B-v3.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.IQ4_XS.gguf) | IQ4_XS | 10.01GB | | [DarkForest-20B-v3.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q4_0.gguf) | Q4_0 | 10.52GB | | [DarkForest-20B-v3.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.IQ4_NL.gguf) | IQ4_NL | 10.57GB | | [DarkForest-20B-v3.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q4_K_S.gguf) | Q4_K_S | 10.59GB | | [DarkForest-20B-v3.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q4_K.gguf) | Q4_K | 11.22GB | | [DarkForest-20B-v3.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q4_K_M.gguf) | Q4_K_M | 11.22GB | | [DarkForest-20B-v3.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q4_1.gguf) | Q4_1 | 11.67GB | | [DarkForest-20B-v3.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q5_0.gguf) | Q5_0 | 12.83GB | | [DarkForest-20B-v3.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q5_K_S.gguf) | Q5_K_S | 12.83GB | | [DarkForest-20B-v3.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q5_K.gguf) | Q5_K | 13.18GB | | [DarkForest-20B-v3.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q5_K_M.gguf) | Q5_K_M | 13.18GB | | [DarkForest-20B-v3.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q5_1.gguf) | Q5_1 | 13.98GB | | [DarkForest-20B-v3.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q6_K.gguf) | Q6_K | 15.28GB | | [DarkForest-20B-v3.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf/blob/main/DarkForest-20B-v3.0.Q8_0.gguf) | 
Q8_0 | 19.79GB | Original model description: --- license: other tags: - merge - not-for-all-audiences license_name: microsoft-research-license --- # DarkForest 20B v3.0 ![image/png](https://huggingface.co/TeeZee/DarkForest-20B-v3.0/resolve/main/DarkForest-20B-v3.0.jpg) ## Model Details - To create this model, a five-step merge procedure was used. - The resulting model has approximately 20 billion parameters. - Details of the merge steps are in the files: - [darkforest_v3_step1.yml ](https://huggingface.co/TeeZee/DarkForest-20B-v3.0/blob/main/darkforest_v3_step1.yml) - [darkforest_v3_step2.yml ](https://huggingface.co/TeeZee/DarkForest-20B-v3.0/blob/main/darkforest_v3_step2.yml) - [darkforest_v3_step3.yml ](https://huggingface.co/TeeZee/DarkForest-20B-v3.0/blob/main/darkforest_v3_step3.yml) - [darkforest_v3_step4.yml ](https://huggingface.co/TeeZee/DarkForest-20B-v3.0/blob/main/darkforest_v3_step4.yml) - [darkforest_v3_step5.yml ](https://huggingface.co/TeeZee/DarkForest-20B-v3.0/blob/main/darkforest_v3_step5.yml) ## Models used - custom model, based on athirdpath/Orca-2-13b-Alpaca-Uncensored and KoboldAI/LLaMA2-13B-Erebus-v3 - BigMaid-20B-v2.0 - athirdpath/Harmonia-20B - athirdpath/Iambe-RP-v3-20b ## Models removed - jebcarter_psyonic-cetacean-20B ## Merge method - All merges were done in float32 precision; where applicable, the breadcrumbs_ties merge method was used. **Warning: This model can produce NSFW content!** ## Results - Main difference from v2.x: the model follows character cards and the user profile much better. - Produces SFW and NSFW content without issues, and switches context seamlessly. - Good at following instructions. - Good at tracking multiple characters in one scene. - Very creative; scenarios produced are mature and complicated, and the model doesn't shy away from writing about PTSD, mental issues or complicated relationships. - NSFW output is more creative and surprising than typical limaRP output. - Definitely for mature audiences, not only because of the vivid NSFW content but also because of the overall maturity of the stories it produces. - This is NOT Harry Potter level storytelling. All comments are greatly appreciated; download, test, and if you appreciate my work, consider buying me my fuel: <a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
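The card above lists the available GGUF quantizations but no loading example. A minimal sketch, assuming llama-cpp-python and huggingface_hub are installed; the Q4_K_M file, the context length and the prompt are illustrative choices, not recommendations from the card:

```python
# Sketch (assumption): download one quant from this repo and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/TeeZee_-_DarkForest-20B-v3.0-gguf",
    filename="DarkForest-20B-v3.0.Q4_K_M.gguf",  # any file from the table above works
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length is an assumed setting
out = llm("Write the opening line of a moody short story.", max_tokens=128)
print(out["choices"][0]["text"])
```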
dima806/food_type_image_detection_new
dima806
2024-10-27T16:02:57Z
230
1
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-09-16T10:26:18Z
--- license: apache-2.0 metrics: - accuracy - f1 base_model: - google/vit-base-patch16-224-in21k --- See https://www.kaggle.com/code/dima806/food-type-detection-vit for more details.
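The card itself only links to the Kaggle notebook. A minimal inference sketch, assuming the standard transformers image-classification pipeline applies to this ViT fine-tune; the input filename is hypothetical:

```python
# Sketch (assumption): standard image-classification pipeline usage for this ViT fine-tune.
from transformers import pipeline

classifier = pipeline("image-classification", model="dima806/food_type_image_detection_new")
print(classifier("example_dish.jpg"))  # hypothetical local image; returns [{"label": ..., "score": ...}, ...]
```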
RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf
RichardErkhov
2024-10-27T16:00:16Z
17
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T15:39:03Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) BongLlama-1.1B-Chat-alpha-v0 - GGUF - Model creator: https://huggingface.co/lumatic-ai/ - Original model: https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0/ | Name | Quant method | Size | | ---- | ---- | ---- | | [BongLlama-1.1B-Chat-alpha-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q2_K.gguf) | Q2_K | 0.4GB | | [BongLlama-1.1B-Chat-alpha-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [BongLlama-1.1B-Chat-alpha-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K.gguf) | Q3_K | 0.51GB | | [BongLlama-1.1B-Chat-alpha-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [BongLlama-1.1B-Chat-alpha-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [BongLlama-1.1B-Chat-alpha-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [BongLlama-1.1B-Chat-alpha-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_0.gguf) | Q4_0 | 0.59GB | | [BongLlama-1.1B-Chat-alpha-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [BongLlama-1.1B-Chat-alpha-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [BongLlama-1.1B-Chat-alpha-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_K.gguf) | Q4_K | 0.62GB | | [BongLlama-1.1B-Chat-alpha-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [BongLlama-1.1B-Chat-alpha-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_1.gguf) | Q4_1 | 0.65GB | | [BongLlama-1.1B-Chat-alpha-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_0.gguf) | Q5_0 | 0.71GB | | [BongLlama-1.1B-Chat-alpha-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [BongLlama-1.1B-Chat-alpha-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_K.gguf) | Q5_K | 0.73GB | | 
[BongLlama-1.1B-Chat-alpha-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [BongLlama-1.1B-Chat-alpha-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_1.gguf) | Q5_1 | 0.77GB | | [BongLlama-1.1B-Chat-alpha-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q6_K.gguf) | Q6_K | 0.84GB | | [BongLlama-1.1B-Chat-alpha-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: mit datasets: - lumatic-ai/BongChat-v0-10k language: - bn - en metrics: - accuracy library_name: transformers pipeline_tag: text-generation tags: - text-generation-inference - sft - llama - bongllama - tinyllama - llm --- <style> img{ width: 45vw; height: 45vh; margin: 0 auto; display: flex; align-items: center; justify-content: center; } </style> # lumaticai/BongLlama-1.1B-Chat-alpha-v0 Introducing BongLlama by LumaticAI. A finetuned version of TinyLlama 1.1B Chat on a Bengali dataset. <img class="custom-image" src="bong_llama.png" alt="BongLlama"> # Model Details ## Model Description BongLlama is a sub-part of our company's initiative for developing Indic and Regional Large Language Models. We are LumaticAI, continuously working on helping our clients build Custom AI Solutions for their organization. We have taken an initiative to launch open source models specific to regions and languages. BongLlama is an LLM built for West Bengal on a Bengali dataset. It's a 1.1B-parameter model. We used a Bengali dataset of 10k examples, i.e. lumatic-ai/BongChat-10k-v0, and finetuned the TinyLlama/TinyLlama-1.1B-Chat-v1.0 model to get our BongLlama 1.1B Chat Alpha v0 model. We are continuously working on training and developing this model and improving it. We are also going to launch this model in various sizes, based on different LLMs and datasets. - **Developed by:** LumaticAI - **Shared by [Optional]:** LumaticAI - **Model type:** Language model - **Language(s) (NLP):** en, bn - **License:** mit - **Parent Model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0 # Uses ## Direct Use - base model for further finetuning - get an overview of how Indic LLMs work on a specific language - for fun ## Downstream Use - can be deployed with an API - used to create a web app or app to show a demo ## Out-of-Scope Use - cannot be used for production purposes - cannot be used to generate text for research or academic purposes # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. # How to Get Started with the Model Use the code below to get started with the model.
<details> <summary> Click to expand </summary> ### Pipeline ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers import pipeline def formatted_prompt(question)-> str: return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:" hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0" tokenizer = AutoTokenizer.from_pretrained(hub_model_name) pipe = pipeline( "text-generation", model=hub_model_name, torch_dtype=torch.float16, device_map="auto", ) from time import perf_counter start_time = perf_counter() prompt = formatted_prompt('হ্যালো') sequences = pipe( prompt, do_sample=True, temperature=0.1, top_p=0.9, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_new_tokens=256 ) for seq in sequences: print(f"Result: {seq['generated_text']}") output_time = perf_counter() - start_time print(f"Time taken for inference: {round(output_time,2)} seconds") ``` ### Streaming Response (ChatGPT, Bard like) ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer def formatted_prompt(question)-> str: return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:" hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0" tokenizer = AutoTokenizer.from_pretrained(hub_model_name) model = AutoModelForCausalLM.from_pretrained(hub_model_name) prompt = formatted_prompt('prompt here') inputs = tokenizer([prompt], return_tensors="pt") streamer = TextStreamer(tokenizer) _ = model.generate(**inputs, eos_token_id=[tokenizer.eos_token_id],streamer=streamer, max_new_tokens=256) ``` ### Using Generation Config ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig from time import perf_counter def formatted_prompt(question)-> str: return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:" hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0" tokenizer = AutoTokenizer.from_pretrained(hub_model_name) model = AutoModelForCausalLM.from_pretrained(hub_model_name) prompt = formatted_prompt('হ্যালো') # Check for GPU availability if torch.cuda.is_available(): device = "cuda" else: device = "cpu" # Move model and inputs to the GPU (if available) model.to(device) inputs = tokenizer(prompt, return_tensors="pt").to(device) generation_config = GenerationConfig( penalty_alpha=0.6, do_sample=True, top_k=5, temperature=0.5, repetition_penalty=1.2, max_new_tokens=256, pad_token_id=tokenizer.eos_token_id ) start_time = perf_counter() outputs = model.generate(**inputs, generation_config=generation_config) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) output_time = perf_counter() - start_time print(f"Time taken for inference: {round(output_time, 2)} seconds") ``` </details> # Training Details ## Training Data We used our dataset of 10k examples, which consists of questions and responses. The dataset name is lumatic-ai/BongChat-v0-10k. **Example Data** - &lt;|im_start|&gt;user এটি কি ইতিবাচক পর্যালোচনা?
এই ফ্র্যাঞ্চাইজি কখনও হার্ট স্টপ স্টান্টস, ফাইট সিকোয়েন্স এবং তারকা শক্তি বন্ধ করে হতাশ করে না।এবার মূল চারটি ছাড়াও নতুন মুখগুলি হলেন আবুয়েলিতা চরিত্রে রিতা মোরেনো এবং দান্তে জেসন মোমোয়া যিনি শোটি চুরি করেছেন।&lt;|im_end|&gt; &lt;|im_start|&gt;assistant এটি একটি খুব ইতিবাচক পর্যালোচনা, এটি ইঙ্গিত করে যে ফ্র্যাঞ্চাইজি শ্রোতাদের জন্য আরও পছন্দসই চরিত্রগুলি যুক্ত করার পাশাপাশি অতীতের প্রত্যাশার ভিত্তিতে বিতরণ করেছে।&lt;|im_end|&gt; ## Training Procedure ### Preprocessing - Dataset Format &lt;|im_start|&gt;user &lt;question&gt;&lt;|im_end|&gt; &lt;|im_start|&gt;assistant &lt;response&gt;&lt;|im_end|&gt; ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0 # Evaluation ### Metrics - train/loss - steps ## Results ||\_runtime|\_timestamp|train/epoch|train/total\_flos|train/train\_loss|train/global\_step|train/train\_steps\_per\_second|train/loss|train/train\_samples\_per\_second|train/train\_runtime|\_step|train/learning\_rate| |---|---|---|---|---|---|---|---|---|---|---|---|---| |0|205\.76071906089783|1705483341\.4811552|0\.08|||100||1\.2865|||0|0\.0001869158878504673| |1|406\.9242510795593|1705483542\.6446872|0\.17|||200||1\.0698|||1|0\.00019964245392895794| |2|607\.5763952732086|1705483743\.2968314|0\.25|||300||1\.0457|||2|0\.00019846317589644678| |3|808\.9941129684448|1705483944\.714549|0\.34|||400||1\.0131|||3|0\.00019646988832610704| |4|1012\.7936038970947|1705484148\.51404|0\.42|||500||1\.0|||4|0\.00019367907001906532| |5|1217\.8231673240662|1705484353\.5436034|0\.51|||600||0\.9913|||5|0\.0001901137930801933| |6|1422\.651272058487|1705484558\.3717082|0\.59|||700||0\.9904|||6|0\.00018580353217762766| |7|1624\.9901471138|1705484760\.7105832|0\.67|||800||0\.9705|||7|0\.0001807839208713596| |8|1827\.1909170150757|1705484962\.911353|0\.76|||900||0\.9661|||8|0\.00017509645702535999| |9|2033\.6470217704773|1705485169\.3674579|0\.84|||1000||0\.9588|||9|0\.00016878815973864268| |10|2241\.5517098903656|1705485377\.272146|0\.93|||1100||0\.9469|||10|0\.00016191118063146672| |11|2446\.751221895218|1705485582\.471658|1\.01|||1200||0\.9453|||11|0\.0001545223727002313| |12|2648\.367230653763|1705485784\.0876667|1\.09|||1300||0\.9329|||12|0\.0001466828203054036| |13|2849\.9791855812073|1705485985\.6996217|1\.18|||1400||0\.9299|||13|0\.0001384573341781387| |14|3050\.282051086426|1705486186\.0024872|1\.26|||1500||0\.9181|||14|0\.00012991391562044527| |15|3252\.6823406219482|1705486388\.4027767|1\.35|||1600||0\.917|||15|0\.00012112319432843371| |16|3456\.3907039165497|1705486592\.11114|1\.43|||1700||0\.919|||16|0\.00011215784448624378| |17|3658\.387463569641|1705486794\.1078997|1\.52|||1800||0\.9156|||17|0\.00010309198395788984| |18|3860\.850716114044|1705486996\.5711522|1\.6|||1900||0\.9074|||18|9\.400056154399221e-05| |19|4063\.906144142151|1705487199\.6265802|1\.68|||2000||0\.9072|||19|8\.49587373690336e-05| |20|4266\.29203081131|1705487402\.012467|1\.77|||2100||0\.9061|||20|7\.604126152157019e-05| |21|4468\.759161949158|1705487604\.479598|1\.85|||2200||0\.9104|||21|6\.732185608427e-05| 
|22|4671\.109050750732|1705487806\.8294868|1\.94|||2300||0\.9016|||22|5\.8872605662626776e-05| |23|4875\.181975841522|1705488010\.902412|2\.02|||2400||0\.8957|||23|5\.076336145093832e-05| |24|5077\.5954213142395|1705488213\.3158574|2\.11|||2500||0\.8948|||24|4\.3061163762223156e-05| |25|5280\.958572149277|1705488416\.6790082|2\.19|||2600||0\.8833|||25|3\.582968779610564e-05| |26|5483\.901570320129|1705488619\.6220064|2\.27|||2700||0\.9019|||26|2\.912871722658781e-05| |27|5684\.498034954071|1705488820\.218471|2\.36|||2800||0\.8921|||27|2\.30136499616351e-05| |28|5885\.339627027512|1705489021\.0600631|2\.44|||2900||0\.8897|||28|1\.753504016053409e-05| |29|6089\.49475812912|1705489225\.2151942|2\.53|||3000||0\.8765|||29|1\.2738180295232205e-05| |30|6291\.281028032303|1705489427\.0014641|2\.61|||3100||0\.889|||30|8\.662726710819169e-06| |31|6494\.627055644989|1705489630\.3474917|2\.69|||3200||0\.8846|||31|5\.342371780697386e-06| |32|6695\.168158054352|1705489830\.8885942|2\.78|||3300||0\.8908|||32|2\.804565366782108e-06| |33|6898\.186992406845|1705490033\.9074285|2\.86|||3400||0\.885|||33|1\.0702878874610523e-06| |34|7099\.970013856888|1705490235\.69045|2\.95|||3500||0\.8871|||34|1\.5387686939386526e-07| |35|7221\.330135822296|1705490357\.050572|3\.0|8\.3571998449877e+16|0\.9397975607756582|3561|0\.491||3\.926|7259\.0631|35|| # Model Examination We will be further finetuning this model on large dataset to see how it performs # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** 1 X Tesla T4 - **Hours used:** 2.21 - **Cloud Provider:** Google Colab - **Compute Region:** India - **Carbon Emitted:** 0.14 # Technical Specifications ## Model Architecture and Objective Finetuned on Tiny-Llama 1.1B Chat model ### Hardware 1 X Tesla T4 # Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @misc{BongLlama-1.1B-Chat-alpha-v0, url={[https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0](https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0)}, title={BongLlama 1.1B Chat Aplha V0}, author={LumaticAI, Rohan Shaw, Vivek Kushal, Jeet Ghosh}, year={2024}, month={Jan} } ``` # Model Card Authors lumatic-ai # Model Card Contact email : [email protected]
MikeRoz/TheDrummer_Behemoth-123B-v1.1-5.0bpw-h6-exl2
MikeRoz
2024-10-27T15:58:31Z
5
4
null
[ "safetensors", "mistral", "license:other", "5-bit", "exl2", "region:us" ]
null
2024-10-27T11:28:10Z
--- license: other --- # Join our Discord! https://discord.gg/Nbv9pQ88Xb ## Nearly 2000 members strong 💪 --- [BeaverAI](https://huggingface.co/BeaverAI) proudly presents... # Behemoth 123B v1.1 🦣 - Creative Edition *When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/5405NZoj_ptSMO_qM09EW.png) ## Description > One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine > I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better. > v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison. > v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously. > The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments that I would say... 'Shit, let I never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes, those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else > It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging. ## Links - Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1 - GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF - iMatrix: WIP ## Arsenal (Supported Chat Templates) - Mistral - Smart, adaptable, familiar - Metharme (Pygmalion in ST) - Creative, unhinged, unique - Alpaca - Creative, unique, unhinged - Text Completion - You can mix it up and see which works best for you. ### Favorite RP Format `*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV ## What's Next? - Already have plans for a v2! ## Special Thanks - Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier. - KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/KvyYIIA1zkxQNEdGro007.png) <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf
RichardErkhov
2024-10-27T15:57:21Z
10
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T08:06:01Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) internlm2-math-20b-llama - GGUF - Model creator: https://huggingface.co/bartowski/ - Original model: https://huggingface.co/bartowski/internlm2-math-20b-llama/ | Name | Quant method | Size | | ---- | ---- | ---- | | [internlm2-math-20b-llama.Q2_K.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q2_K.gguf) | Q2_K | 7.03GB | | [internlm2-math-20b-llama.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q3_K_S.gguf) | Q3_K_S | 8.16GB | | [internlm2-math-20b-llama.Q3_K.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q3_K.gguf) | Q3_K | 9.05GB | | [internlm2-math-20b-llama.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q3_K_M.gguf) | Q3_K_M | 9.05GB | | [internlm2-math-20b-llama.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q3_K_L.gguf) | Q3_K_L | 9.83GB | | [internlm2-math-20b-llama.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.IQ4_XS.gguf) | IQ4_XS | 10.12GB | | [internlm2-math-20b-llama.Q4_0.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q4_0.gguf) | Q4_0 | 10.55GB | | [internlm2-math-20b-llama.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.IQ4_NL.gguf) | IQ4_NL | 10.65GB | | [internlm2-math-20b-llama.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q4_K_S.gguf) | Q4_K_S | 10.62GB | | [internlm2-math-20b-llama.Q4_K.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q4_K.gguf) | Q4_K | 11.16GB | | [internlm2-math-20b-llama.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q4_K_M.gguf) | Q4_K_M | 11.16GB | | [internlm2-math-20b-llama.Q4_1.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q4_1.gguf) | Q4_1 | 11.67GB | | [internlm2-math-20b-llama.Q5_0.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q5_0.gguf) | Q5_0 | 12.79GB | | [internlm2-math-20b-llama.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q5_K_S.gguf) | Q5_K_S | 12.79GB | | [internlm2-math-20b-llama.Q5_K.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q5_K.gguf) | Q5_K | 13.11GB | | [internlm2-math-20b-llama.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q5_K_M.gguf) | Q5_K_M | 13.11GB | | [internlm2-math-20b-llama.Q5_1.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q5_1.gguf) | Q5_1 | 
13.91GB | | [internlm2-math-20b-llama.Q6_K.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q6_K.gguf) | Q6_K | 15.18GB | | [internlm2-math-20b-llama.Q8_0.gguf](https://huggingface.co/RichardErkhov/bartowski_-_internlm2-math-20b-llama-gguf/blob/main/internlm2-math-20b-llama.Q8_0.gguf) | Q8_0 | 19.66GB | Original model description: --- pipeline_tag: text-generation license: other --- # InternLM <div align="center"> <img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">InternLM</font></b> <sup> <a href="https://internlm.intern-ai.org.cn/"> <i><font size="4">HOT</font></i> </a> </sup> <div>&nbsp;</div> </div> [![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/) [💻Github Repo](https://github.com/InternLM/InternLM) </div> ## Converted using <a href="https://huggingface.co/chargoddard">Charles Goddard's</a> conversion script to create llama models from internlm Original REPO link: https://huggingface.co/internlm/internlm2-math-20b ExLLamaV2 link: https://huggingface.co/bartowski/internlm2-math-20b-llama-exl2
sridharsamala/gita-text-generation-gpt2
sridharsamala
2024-10-27T15:55:35Z
134
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T15:54:50Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ykaneda/sd-class-butterflies-32
ykaneda
2024-10-27T15:47:03Z
45
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2024-10-27T15:46:40Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('ykaneda/sd-class-butterflies-32') image = pipeline().images[0] image ```
Bonbone/Translator-kde4
Bonbone
2024-10-27T15:43:48Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "dataset:kde4", "base_model:Helsinki-NLP/opus-mt-en-fr", "base_model:finetune:Helsinki-NLP/opus-mt-en-fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-27T15:05:04Z
--- library_name: transformers license: apache-2.0 base_model: Helsinki-NLP/opus-mt-en-fr tags: - generated_from_trainer datasets: - kde4 model-index: - name: Translator-kde4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Translator-kde4 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.45.2 - Pytorch 2.3.0+cu121 - Datasets 3.0.1 - Tokenizers 0.20.1
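The card gives training details but no usage snippet. A minimal sketch, assuming the standard translation pipeline works for this Marian fine-tune exactly as it does for the Helsinki-NLP/opus-mt-en-fr base model; the input sentence is only illustrative:

```python
# Sketch (assumption): standard en->fr translation pipeline usage for this fine-tune.
from transformers import pipeline

translator = pipeline("translation", model="Bonbone/Translator-kde4")
print(translator("Default to expanded threads")[0]["translation_text"])  # illustrative English input
```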
arnaudstiegler/game-n-gen-finetuned-23k-no-cfg
arnaudstiegler
2024-10-27T15:39:42Z
5
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers-training", "lora", "base_model:CompVis/stable-diffusion-v1-4", "base_model:adapter:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-09-04T15:12:23Z
--- base_model: CompVis/stable-diffusion-v1-4 library_name: diffusers license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training - lora - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - diffusers-training inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # LoRA text2image fine-tuning - arnaudstiegler/sd-model-gameNgen These are LoRA adaption weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the arnaudstiegler/gameNgen_test_dataset dataset. You can find some example images in the following. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details Command: ``` python train_text_to_image.py --dataset_name P-H-B-D-a16z/ViZDoom-Deathmatch-PPO-Lrg --gradient_checkpointing --learning_rate 5e-5 --train_batch_size 8 --num_train_epochs 10 --validation_steps 250 --output_dir sd-model-finetune --push_to_hub --report_to wandb ```
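The card's own usage snippet is still a TODO; here is one possible sketch, assuming a diffusers version with load_lora_weights support and the repo id shown above. The prompt is illustrative and not taken from the training data:

```python
# Sketch (assumption): load the SD 1.4 base pipeline and attach these LoRA weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("arnaudstiegler/game-n-gen-finetuned-23k-no-cfg")

image = pipe("a first-person view of a dark game corridor").images[0]  # illustrative prompt
image.save("sample.png")
```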
jebish7/indicbert-A
jebish7
2024-10-27T15:36:34Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T15:36:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
blobber93/donut-base-sroie
blobber93
2024-10-27T15:19:29Z
49
0
transformers
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-10-25T10:34:35Z
--- library_name: transformers license: mit base_model: naver-clova-ix/donut-base tags: - generated_from_trainer datasets: - imagefolder model-index: - name: donut-base-sroie results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.46.0 - Pytorch 2.5.1+xpu - Datasets 3.0.2 - Tokenizers 0.20.1
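The card documents training only. A hedged inference sketch using the generic Donut API from transformers; the decoder task prompt for this particular SROIE fine-tune is not documented here, so the "<s>" token below is only a placeholder, and the input image path is hypothetical:

```python
# Sketch (assumption): generic Donut-style inference with a VisionEncoderDecoderModel.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("blobber93/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("blobber93/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")  # hypothetical receipt image
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids  # placeholder task prompt

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```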
Ahmedhany216/Monglish_Arabic_FAQ
Ahmedhany216
2024-10-27T14:55:38Z
106
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:CAMeL-Lab/bert-base-arabic-camelbert-msa", "base_model:finetune:CAMeL-Lab/bert-base-arabic-camelbert-msa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T14:19:46Z
--- base_model: CAMeL-Lab/bert-base-arabic-camelbert-msa library_name: transformers license: apache-2.0 metrics: - accuracy - f1 - precision - recall tags: - generated_from_trainer model-index: - name: Monglish_Arabic_FAQ results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Monglish_Arabic_FAQ This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-msa](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0526 - Accuracy: 0.9885 - F1: 0.9884 - Precision: 0.9888 - Recall: 0.9885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.1639 | 1.0 | 520 | 0.3404 | 0.9577 | 0.9573 | 0.9696 | 0.9577 | | 0.0438 | 2.0 | 1040 | 0.0681 | 0.9885 | 0.9886 | 0.9891 | 0.9885 | | 0.021 | 3.0 | 1560 | 0.0526 | 0.9885 | 0.9884 | 0.9888 | 0.9885 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
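The card reports strong evaluation scores but no usage example. A minimal sketch, assuming the standard text-classification pipeline applies; the returned label names depend on this fine-tune's config, and the Arabic query is only illustrative:

```python
# Sketch (assumption): standard text-classification pipeline usage for this Arabic BERT fine-tune.
from transformers import pipeline

classifier = pipeline("text-classification", model="Ahmedhany216/Monglish_Arabic_FAQ")
print(classifier("كيف يمكنني التسجيل في الدورة؟"))  # illustrative FAQ-style query
```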
James2313123/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B_5bpw-h8-EXL2
James2313123
2024-10-27T14:53:26Z
6
0
null
[ "safetensors", "llama", "exl2", "5bpw", "en", "license:apache-2.0", "5-bit", "region:us" ]
null
2024-10-27T14:11:23Z
--- license: apache-2.0 language: - en base_model: DavidAU/DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B quantized_by: James2313123 tags: - exl2 - 5bpw --- ### Model Description 5bpw-h8-exl2 quant of DavidAU's DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B Link to original model and creator: https://huggingface.co/DavidAU/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B
LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V2
LBK95
2024-10-27T14:42:01Z
12
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-10-27T07:41:54Z
--- base_model: meta-llama/Llama-2-7b-hf library_name: peft license: llama2 tags: - trl - dpo - generated_from_trainer model-index: - name: Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V2 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2147 - Rewards/chosen: -2.3589 - Rewards/rejected: -2.1848 - Rewards/accuracies: 0.3333 - Rewards/margins: -0.1740 - Logps/rejected: -176.9075 - Logps/chosen: -185.7344 - Logits/rejected: -0.3397 - Logits/chosen: -0.3554 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.7064 | 0.3020 | 77 | 0.7263 | -0.0650 | -0.0237 | 0.5 | -0.0414 | -155.2957 | -162.7962 | 0.2969 | 0.2895 | | 0.6816 | 0.6039 | 154 | 0.7127 | -0.1015 | -0.1222 | 0.5 | 0.0207 | -156.2813 | -163.1606 | 0.2989 | 0.2915 | | 0.6192 | 0.9059 | 231 | 0.7010 | -0.0808 | -0.1624 | 0.5833 | 0.0816 | -156.6835 | -162.9536 | 0.2774 | 0.2692 | | 0.2805 | 1.2078 | 308 | 0.8302 | -0.5931 | -0.6582 | 0.6667 | 0.0651 | -161.6412 | -168.0767 | 0.1922 | 0.1839 | | 0.3604 | 1.5098 | 385 | 0.8663 | -0.8552 | -0.8899 | 0.5833 | 0.0347 | -163.9578 | -170.6977 | 0.0866 | 0.0775 | | 0.3524 | 1.8118 | 462 | 0.9587 | -1.3495 | -1.3440 | 0.5 | -0.0055 | -168.4993 | -175.6406 | -0.0538 | -0.0645 | | 0.2168 | 2.1137 | 539 | 1.0785 | -1.8309 | -1.7601 | 0.5833 | -0.0708 | -172.6597 | -180.4545 | -0.2246 | -0.2382 | | 0.0395 | 2.4157 | 616 | 1.2284 | -2.4130 | -2.2406 | 0.3333 | -0.1724 | -177.4654 | -186.2757 | -0.3472 | -0.3633 | | 0.2081 | 2.7176 | 693 | 1.2147 | -2.3589 | -2.1848 | 0.3333 | -0.1740 | -176.9075 | -185.7344 | -0.3397 | -0.3554 | ### Framework versions - PEFT 0.12.0 - Transformers 4.44.0 - Pytorch 2.4.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
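The card describes the DPO training run but not how to load the resulting PEFT adapter. A minimal sketch, assuming access to the gated meta-llama/Llama-2-7b-hf base model; the prompt is illustrative:

```python
# Sketch (assumption): attach this DPO-trained LoRA adapter to the Llama-2 base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "LBK95/Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V2")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")  # illustrative prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```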
YAHTHANT/gita-text-generation-gpt2
YAHTHANT
2024-10-27T14:35:50Z
130
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "base_model:openai/whisper-large-v3-turbo", "base_model:finetune:openai/whisper-large-v3-turbo", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T12:45:12Z
--- library_name: transformers license: mit base_model: - openai/whisper-large-v3-turbo --- # Model Card for yahthant ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details Training Data: sumanthk/PEFT_expo ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Pantheon_ChatWaifu_V0.2-GGUF
mradermacher
2024-10-27T14:31:08Z
8
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Triangle104/Pantheon_ChatWaifu_V0.2", "base_model:quantized:Triangle104/Pantheon_ChatWaifu_V0.2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T11:59:20Z
--- base_model: Triangle104/Pantheon_ChatWaifu_V0.2 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Triangle104/Pantheon_ChatWaifu_V0.2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
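The Usage section above defers to TheBloke's READMEs for general GGUF handling. As a hedged illustration that is not part of the original card, one common way to run one of these files locally is through llama-cpp-python; the file name matches the Q4_K_M row in the table, while the context size and GPU-offload settings are assumptions to adjust for your hardware.

```py
# Minimal sketch with llama-cpp-python. Download the quant first, e.g.:
#   huggingface-cli download mradermacher/Pantheon_ChatWaifu_V0.2-GGUF \
#     Pantheon_ChatWaifu_V0.2.Q4_K_M.gguf --local-dir .
from llama_cpp import Llama

llm = Llama(
    model_path="Pantheon_ChatWaifu_V0.2.Q4_K_M.gguf",
    n_ctx=4096,       # assumed context window
    n_gpu_layers=-1,  # offload all layers to GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```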
mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF
mradermacher
2024-10-27T14:31:08Z
13
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Triangle104/Pantheon_ChatWaifu_V0.2", "base_model:quantized:Triangle104/Pantheon_ChatWaifu_V0.2", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T14:08:54Z
--- base_model: Triangle104/Pantheon_ChatWaifu_V0.2 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Triangle104/Pantheon_ChatWaifu_V0.2 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Pantheon_ChatWaifu_V0.2-i1-GGUF/resolve/main/Pantheon_ChatWaifu_V0.2.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
bingbangboom/flux_oilscape
bingbangboom
2024-10-27T14:28:09Z
144
8
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-01T16:53:43Z
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: in the style of Oilstyle002 license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md widget: - text: "a white-haired young woman wearing a flower crown, a very large fiery dragon, castle in the background, in the style of Oilstyle002" output: url: extras/1.jpg - text: "a young woman wearing glasses, a baseball cap and a scarf, standing in front of old dilapidated lighthouse, crashing waves, in the style of Oilstyle002" output: url: extras/2.jpg - text: "a cat in a field of lavender flowers, in the style of Oilstyle002" output: url: extras/3.jpg --- # flux_Oilstyle <Gallery /> <table> <tr> <td><img src="./images/1.png" alt="Example 1" style="width:100%;"></td> <td><img src="./images/2.png" alt="Example 2" style="width:100%;"></td> </tr> <tr> <td><img src="./images/3.png" alt="Example 3" style="width:100%;"></td> <td><img src="./images/4.png" alt="Example 4" style="width:100%;"></td> </tr> </table> ## Model description Flux LoRA for an oil painting look. Use *in the style of Oilstyle002* to trigger the model. Trained completely on public domain images. Sample Prompt: `a barn owl emerging from the shadows of a nighttime forest, in the style of Oilstyle002` ## Trigger words You should use `in the style of Oilstyle002` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/bingbangboom/flux_Oilstyle/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
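As a hedged usage sketch (not part of the original card): the LoRA targets the FLUX.1-dev base pipeline, so loading it with diffusers would typically look like the snippet below. The repository id follows this listing, and the adapter file name is left for diffusers to resolve automatically.

```py
# Minimal sketch: apply the oil-painting LoRA to FLUX.1-dev and use the
# documented trigger phrase "in the style of Oilstyle002" in the prompt.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("bingbangboom/flux_oilscape")  # repo id from this listing

image = pipeline(
    "a barn owl emerging from the shadows of a nighttime forest, "
    "in the style of Oilstyle002"
).images[0]
image.save("oilstyle_owl.png")
```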
waldie/MS-Schisandra-22B-vA-6.5bpw-h6-exl2
waldie
2024-10-27T14:22:56Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-10-27T13:50:36Z
--- base_model: Nohobby/MS-Schisandra-22B-vA quantized_by: waldie library_name: transformers tags: - mergekit - merge --- # Schisandra This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the della_linear merge method using [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2) as a base. ### Models Merged The following models were included in the merge: * [Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small) * [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) * [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) * [spow12/ChatWaifu_v2.0_22B](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) * QCmix ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: della_linear dtype: bfloat16 parameters: normalize: true int8_mask: true tokenizer_source: union base_model: TheDrummer/Cydonia-22B-v1.2 models: - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 parameters: density: 0.55 weight: 1 - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small parameters: density: 0.55 weight: 1 - model: spow12/ChatWaifu_v2.0_22B parameters: density: 0.55 weight: 1 - model: anthracite-org/magnum-v4-22b parameters: density: 0.55 weight: 1 - model: QCmix parameters: density: 0.55 weight: 1 ```
mradermacher/Qwen-modelstock-15B-i1-GGUF
mradermacher
2024-10-27T14:12:09Z
18
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:allknowingroger/Qwen-modelstock-15B", "base_model:quantized:allknowingroger/Qwen-modelstock-15B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T13:31:28Z
--- base_model: allknowingroger/Qwen-modelstock-15B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/allknowingroger/Qwen-modelstock-15B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen-modelstock-15B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q4_0_4_8.gguf) | 
i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock-15B-i1-GGUF/resolve/main/Qwen-modelstock-15B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Mahmoud3899/Boolean_new
Mahmoud3899
2024-10-27T13:57:08Z
111
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T13:56:38Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
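The card above is still the auto-generated template, so the following is only a hedged sketch: given the bert and text-classification tags in this listing, the checkpoint would normally be queried through the transformers pipeline as shown below. The label set and intended inputs are not documented, so the example input is purely illustrative.

```py
# Minimal sketch, assuming a standard BERT sequence-classification head.
from transformers import pipeline

clf = pipeline("text-classification", model="Mahmoud3899/Boolean_new")
print(clf("Is this statement true or false?"))  # -> [{'label': ..., 'score': ...}]
```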
mradermacher/RPLament-22B-i1-GGUF
mradermacher
2024-10-27T13:51:06Z
107
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:SvdH/RPLament-22B", "base_model:quantized:SvdH/RPLament-22B", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T09:16:21Z
--- base_model: SvdH/RPLament-22B language: - en library_name: transformers license: other license_link: https://mistral.ai/licenses/MRL-0.1.md license_name: mrl quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SvdH/RPLament-22B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/RPLament-22B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF/resolve/main/RPLament-22B.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
furkanselek/furkan
furkanselek
2024-10-27T13:50:52Z
7
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T13:50:43Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: A person in a bustling cafe furkan output: url: samples/1730036822814__000001000_0.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: furkan license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # furkan Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `furkan` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/furkanselek/furkan/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('furkanselek/furkan', weight_name='furkan.safetensors') image = pipeline('A person in a bustling cafe furkan').images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf
RichardErkhov
2024-10-27T13:49:44Z
9
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-27T08:56:27Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) NM-12B-Lyris-dev-2 - GGUF - Model creator: https://huggingface.co/v000000/ - Original model: https://huggingface.co/v000000/NM-12B-Lyris-dev-2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [NM-12B-Lyris-dev-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q2_K.gguf) | Q2_K | 4.46GB | | [NM-12B-Lyris-dev-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q3_K_S.gguf) | Q3_K_S | 5.15GB | | [NM-12B-Lyris-dev-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q3_K.gguf) | Q3_K | 5.67GB | | [NM-12B-Lyris-dev-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q3_K_M.gguf) | Q3_K_M | 5.67GB | | [NM-12B-Lyris-dev-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q3_K_L.gguf) | Q3_K_L | 6.11GB | | [NM-12B-Lyris-dev-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.IQ4_XS.gguf) | IQ4_XS | 6.33GB | | [NM-12B-Lyris-dev-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q4_0.gguf) | Q4_0 | 6.59GB | | [NM-12B-Lyris-dev-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.IQ4_NL.gguf) | IQ4_NL | 6.65GB | | [NM-12B-Lyris-dev-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q4_K_S.gguf) | Q4_K_S | 6.63GB | | [NM-12B-Lyris-dev-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q4_K.gguf) | Q4_K | 6.96GB | | [NM-12B-Lyris-dev-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q4_K_M.gguf) | Q4_K_M | 6.96GB | | [NM-12B-Lyris-dev-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q4_1.gguf) | Q4_1 | 7.26GB | | [NM-12B-Lyris-dev-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q5_0.gguf) | Q5_0 | 7.93GB | | [NM-12B-Lyris-dev-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q5_K_S.gguf) | Q5_K_S | 7.93GB | | [NM-12B-Lyris-dev-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q5_K.gguf) | Q5_K | 8.13GB | | [NM-12B-Lyris-dev-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q5_K_M.gguf) | Q5_K_M | 8.13GB | | [NM-12B-Lyris-dev-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q5_1.gguf) | Q5_1 | 8.61GB | | [NM-12B-Lyris-dev-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q6_K.gguf) | Q6_K | 9.37GB | | [NM-12B-Lyris-dev-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/v000000_-_NM-12B-Lyris-dev-2-gguf/blob/main/NM-12B-Lyris-dev-2.Q8_0.gguf) | Q8_0 | 12.13GB | Original model description: --- 
base_model: - Sao10K/MN-12B-Lyra-v1 - Sao10K/MN-12B-Lyra-v3 - unsloth/Mistral-Nemo-Instruct-2407 library_name: transformers tags: - merge - mistral license: cc-by-nc-4.0 --- Lyris-dev2-Mistral-Nemo-12B-2407 ----------------------------- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/FykxidAsKvgxipFa7ZIaC.png) An *EXPERIMENTAL* attempt to fix Sao10K's Lyra-v3 prompt format and stop token, and to boost smarts, using a strategic *LATCOS* vector-similarity merging prototype; unfinished, but it mostly works. It sometimes runs on forever, yet it is far more usable and seems to have learned to emit the stop token most of the time. It is still fairly broken, especially when the greeting message is long, and needs even more Nemo-Instruct-2407 merged in. - Sao10K/MN-12B-Lyra-v1 <b>*Base*</b> - Sao10K/MN-12B-Lyra-v3 <b>*x2 Sequential PASS, order: 1, 3*</b> - unsloth/Mistral-Nemo-Instruct-2407 <b>*x1 Single PASS, order: 2*</b> - with z0.0001 value # <b>Prompt format:</b> *Mistral Instruct* ``` [INST] System Message [/INST] [INST] Name: Let's get started. Please respond based on the information and instructions provided above. [/INST] <s>[INST] Name: What is your favourite condiment? [/INST] AssistantName: Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> [INST] Name: Do you have mayonnaise recipes? [/INST] ```
Turkish-NLI/legal_nli_TR_V1
Turkish-NLI
2024-10-27T13:33:11Z
26
1
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:202000", "loss:SoftmaxLoss", "tr", "dataset:Turkish-NLI/legal_nli_TR_V1", "arxiv:1908.10084", "base_model:dbmdz/bert-base-turkish-cased", "base_model:finetune:dbmdz/bert-base-turkish-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-10-27T13:15:00Z
--- datasets: - Turkish-NLI/legal_nli_TR_V1 language: - tr library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:202000 - loss:SoftmaxLoss widget: - source_sentence: >- Davacı vekili dava dilekçesinde özetle; Müvekkili sigorta şirketi ile dava dışı ... arasında ... Sigorta Poliçesinin tanzim edildiğini, sigortalıya ait ... Mah. ... Sok.... ... adresinde kain konutta su basması sonucu 06/06/2018 tarihinde hasar oluştuğunu, müvekkili şirketin poliçe gereği zarara uğrayan sigortalıya 3.803,00 TL hasar ödemesi yapıldığını, bu ödemenin rücuen tazmini amacıyla .... İcra Müdürlüğünün ... E. Sayılı dosyası ile icra takibi başlattıklarını, davalının itirazı üzerine takibin durduğunu belirterek, davanın kabulü ile itirazın iptaline, davalı aleyhine %20'den az olmamak üzere icra inkar tazminatına hükmedilmesine karar verilmesini talep ve dava etmiştir. sentences: - >- Davacı vekili dava dilekçesinde özetle; Davacı ...’ın ... ...’nde 23/07/2013-11/06/2015 tarihlerinde başkanlık yaptığını, Kulübe nakit sağlamak amacıyla davalı ... ile anlaşma yaptığını, Faktoring İşlemlerinde Uygulanacak Usul ve Esaslar Hakkında Yönetmelik' in 8.Maddesinde " Müşterilerden ek teminat mahiyetinde olmak üzere devralınan ve fatura veya fatura yerine geçen belgeler ile ilişkili olmayan kambiyo senedi veya diğer senetlerin tahsil edilebilmesi için; a) Alacağın vadesinde ödenmeyip sorunlu hale gelmiş olması, alınan kambiyo senedi veya diğer senet karşılığında hiçbir şekilde kambiyo senedi ve diğer senedin ilgililerine finansman sağlanmaması, kuruluşun işlem ve muhasebe kayıtlarında ek teminat mahiyetinde alınan kambiyo senedi veya diğer senedin ilgili borcun teminatı karşılığında alındığına ilişkin kayıt düşülmesi Gerekir." maddesinde de görüleceği üzere faktoring şirketinin müşterilerden ek teminat talep edebileceğini, nitekim bunun dışında kambiyo senetlerinde faktoring şirketlerinin lehtar vasfına sahip olabilmesinin mümkün olmadığını, dolayısıyla alacağın temlikini içermeyen bir işlemin faktoring kapsamında değerlendirilebilmesinin de bu işlemlerin özüne aykırı olacağını, yasal düzenlemelerin Yargıtay içtihatları ve doktrin uygulamalarının bir sonucu olarak davalı tarafın takibe dayanak yaptığı 5 adet bononun davalı taraf ile spor kulubü arasında imzalanan faktoring sözleşmesinin teminatı kapsamında verilmiş olup söz konusu senetlerin teminat niteliğine haiz olduğunu, teminat senedine konu olan borcun ödendiğini, bu nedenle davalı tarafın takibinde kötüniyetli ve ağır kusurlu olduğunu, müvekkilinin .... Derneği' ne 23.07.2013 tarihinde başkan seçildiğini ve söz konusu görevi 11.06.2015 tarihine kadar sürdürdüğünü, bununla birlikte dosya kapsamında da mevcut bulunan ... müvekkilinin başkan olarak görev yaptığı yılları kapsayan Haziran 2013- Haziran 2015 dönemine ait temlik borçlanma ve ödeme bilgilerine ilişkin evrakta da açıkça görüleceği üzere müvekkili döneminde gerçekleşen temliklerin karşılığının muhtelif tarihlerde alacaklı olduğunu iddia eden ...' ne ödendiğini, ayrıca davalı tarafça ... Noterliği'nin ... yevmiye nolu müvekkiline çekilen ihtarnamede "... nezdindeki kulüp atacaklarının temliki karşılığı kullandırılan finansmanın 25.525.706,07 TL'ye ulaştığını, ...' 
ın sorumlu olduğu tutarın 20.500.000 TL olduğunu kulübün içinde bulunduğu sportif mali koşullar nedeniyle alacağın geri ödenmesi ciddi anlamda tehlikeye düşülmüş durumda olup, müvekkili ile kulüp arasındaki sözleşme ve bilcümle ekleri çerçevesinde hesabın kat edildiği" ihtar edildiğini, müvekkili tarafından 30.11.2018 Tarihinde davalı tarafa çekilen ... Noterliği' nin ... yevmiye nolu cevab-ı ihtarnamede borcun Ödendiğinden bahisle hesabın kat edilmesine itiraz edildiğini, davalı tarafından çekilen ihtarnamede de açıkça görüleceği üzere senetlerden hiç bahsedilmediğini; sadece ...' ye yapılan temlikten bahsedildiğini, dosyanın eki olarak sunulan ...' den 17.12.2018 tarihinde alınan belgeyle ihtarnameye konu olan borcun ödendiğinin açıkça anlaşılacağını, dolayısıyla faktoring şirketinin müvekkilinin başkanlığı döneminde doğmuş bulunan alacaklarını almış olmasına rağmen tamamen kötüniyetli olarak iş bu takibe giriştiğini, zira ... kayitlarinda da görüleceği üzere söz konusu borcun itfa sebebiyle sona erdiğini, incelendiğinde görüleceği üzere ekte sundukları ...' den alınan resmi belgede 7.000.000 TL lik temlik sarı ile belirtilen şekilde, 9.000.000 TL'ük temlik turuncuyla belirtilen şekilde, 8.500.000 TL lik temlik yeşil ile belirtilen şekilde, 1.500.000 tl'lik temlik kırmızı renkte belirtilen şekilde ödendiğini, davalı yanın, taraflarınca ... İcra Hukuk Mahkemesi' nin ... E. numarası ile takibin iptaline ilişkin açılan davaya verdikleri cevapta hiçbir şekilde bu senetlerin neye karşılık alındığını, hangi borcun teminatı olduğunu veya direkt kulübe ve müvekkiline verilen hangi paranın karşılığı alındığı konusunda hiçbir beyanda bulunmadığını, davalı şirket yetkilileri hakkında Bedelsiz senedi kullanma, açığa atılan imzanın kötüye kullanılması ve resmi belgede sahtecilik suçlarından ... CBS' nın ... sor nolu dosyası İle suç duyurusunda bulunulduğunu, müvekkilinin borcu olmayan ve vadesi sonradan doldurularak takibe konulan senetler nedeniyle haksız bir icra takibine maruz kaldığını ifade ederek müvekkilinin .... İcra Müdürlüğü' nün ... E. sayılı Dosyası ile Davalıya Borçlu olmadığının tespitine ve takibe dayanak senederin iptaline, davalıların %20'den aşağı olmamak üzere kötüniyet tazminatına mahkum edilmesine, yargılama giderlerinin davalı tarafa yükletilmesine karar verilmesini talep ve dava etmiştir. - ' Davacı vekili dava dilekçesinde özetle, davalı ... şirketine ...... sigortalı, müvekkiline ait ....... plakalı aracın 06/06/2017 tarihinde çalındığını, araç rayiç bedelinin ödenmesi için sigorta şirketine başvuruda bulunulduğunu, başvuru üzerine ...... nolu dosyanın açıldığını, akabinde noter aracılığıyla ihtar çekildiğini tüm bunlara rağmen sigorta şirketince ödeme yapılmadğını beyanla fazlaya dair hakları saklı kalmak kaydıyla şimdilik 35.000,00 Tl ile aracın rayiç bedeli belirlenerek davalıdan tahsiline karar verilmesini talep ve dava etmiştir.' - ' Davacı vekili dava dilekçesinde özetle; 23.02.2009 tarihinde dava dışı sürücü ... ... sevk ve idaresindeki ... plakalı aracı ... Mahallesi üzerinde ... Köyü istikametine seyir halinde iken yaya olarak yürümekte ve kucağında çocuğu ... ... bulunan müvekkil ... ...''a çarptığını, meydana gelen kazada ... ... vefat ettiğini, kazaya karışan ... plakalı aracın sigorta kaydı bulunmadığından/tespit edilemediğinden müvekkilin uğradığı maddi zararın giderilebilmesi için işbu davayı ... ...na karşı açma mecburiyeti hasıl olduğunu, kaza sebebi ile ölen ... ...''un desteğinden yoksun kalanlar olarak annesi ... ... ve babası ... 
...''un kaldığını, müvekkillerin kaza tarihinde henüz 4 yaşında olan çocuklarını kaybetmiş, şahsın ölümü ile perişan bir duruma düştüklerini, destek bilindiği üzere yakınlarına ve yakın ilişkide bulunduğu başka kimselere sürekli ve düzenli bir biçimde yardım eden, eğer ölmeseydi ileride yardım etmesi beklenen veya büyük bir olasılıkla yardım edecek olan kişi olduğunu, dolayısıyla müvekkillerin, müteveffanın vefatı ile destekten yoksun kaldıkları açık olduğunu, zira ölenin henüz 4 yaşında bir çocuk olması göz önüne alındığından, eğer ölmeseydi ileride ailesine sürekli ve düzenli bir şekilde destek olacağının muhakkak olduğunu, hayatının her anında meydana gelen bu zamansız ölümü hatırlayıp, içlerinde derin sızılar yaşayacak olan müvekkillerin ruh sağlığı derin ve onarılmaz derecede bozulduğunu, müvekkillerin destekten yoksun kalmadan doğan zararları Sayın Mahkemece yaptırılacak bilirkişi incelemesi sonucunda ortaya çıkacağından fazlaya ilişkin dava ve talep haklarımız saklı kalmak kaydıyla, şimdilik 3.000,00 TL destekten yoksun kalma tazminatının davalıdan tahsilini talep ettiklerini, Yargıtay Genel Hukuk Kurulu açılan bir dava üzerine trafik kazasında ölen kişinin tam kusurlu olsa da yakınlarına tazminat ödenmesini kararlaştırıldığını,işbu nedenlerle, şimdilik kaza tarihinden itibaren işleyecek reeskont faizi ile birlikte 3.000,00 TL maddi tazminatın davalıdan tahsiline, yargılama giderleri ve vekalet ücretinin davalı üzerine bırakılmasına karar verilmesini iddia ve talep etmiştir.' - source_sentence: >- Davacı vekili dava dilekçesinde ÖZETLE; vekil edeninin terkin edilen ve ihyası talep edilen ...Sanayi ve Ticaret Limited Şirketi'nden alacaklı olduğunu, vekiledeni tarafından iş bu şirkete Bakırköy .... Noterliği 04/11/2015 tarih ve .... yevmiye numaralı mülkiyeti muhafaza kaydı ile satış sözleşmesi yapmak sureti ile ... plaka ... marka .... model .... cinsi... tipli menkul aracın satışının yaptığını, vekiledeninin alacağını tahsil edemeyince İstanbul ... icra Müdürlüğü ...esas sayılı dosyasından takibe girişildiğini, fiili haciz yapıldığını, ancak borçlu şirketin tasfiye edildiğinin satış aşamasından sonra icra dosyasından yapılan sorgu sonucu öğrenildiğini, şirket adresinin .... Mahallesi... Caddesi No: .... ... -İstanbul olduğunu, şirketin tüzel kişiliğinin ticaret sicilinden silinme ( terkin ) ile sona erdiğini, şirketin tasfiye dışında kalmış ... plaka sayılı aracın varlığı sabit olduğundan usulsüz olarak tasfiye edildiğini, 6335 sayılı kanun ile 6102 sayılı Türk Ticaret Kanunu’na eklenen geçici madde 7 hükmü gereğince şirket adında kayıtlı aracın satılarak paraya çevrilmesi ve alacağın tahsili için iş bu davanın açıldığını beyanla, 03-07-2017 tarihinde terkin olunan ...Sanayi ve Ticaret Limited Şirketi'nin ihyasına karar verilmesini talep ve dava etmişlerdir.DELİLLERİstanbul Ticaret Sicil Müdürlüğü yazı cevabı ve tüm dosya kapsamı.DELİLLERİN DEĞERLENDİRİLMESİ VE GEREKÇE:İş bu dava, hukukî niteliği itibariyle TTK'nun 545.ve devamı maddeleri uyarınca açılmış limited şirketin ihyası ile ticaret siciline tescili davasıdır. İstanbul Ticaret Sicil Müdürlüğü tarafından gönderilen sicil kayıtları incelendiğinde ihyası istenen şirketin terkin olmadan önce merkez adresinin .... / İstanbul olduğu, buna göre mahkememizin 6102 sayılı TTK'nun 547/1 maddesi anlamında kesin yetkili olduğu anlaşılmıştır.Somut olayda ...Sanayi ve Ticaret Limited Şirketi'nin adına kayıtlı olan ... 
plakalı aracın satış işleminin yapılması için ihyasının talep edildiği, İstanbul Ticaret Sicil Müdürlüğünden gönderilen sicil kayıtları incelendiğinde; 927310/0 sicil numarasında kayıtlı ...Sanayi ve Ticaret Limited Şirketi'nin tasfiye nedeniyle sicilden terkin edildiği görülmüştür. sentences: - "Her iki tarafın da\nticari işletmesiyle ilgili hususlardan doğan hukuk davaları ve çekişmesiz yargı\nişleri ile tarafların tacir olup olmadıklarına bakılmaksızın;\nBu Kanunda,\nTürk Medenî Kanununun, rehin karşılığında ödünç verme\nişi ile uğraşanlar hakkındaki 962 ilâ 969 uncu maddelerinde,\n11/1/2011 tarihli ve 6098 sayılı\nTürk Borçlar Kanununun malvarlığının veya işletmenin devralınması ile işletmelerin\nbirleşmesi ve şekil değiştirmesi hakkındaki 202 ve 203, rekabet yasağına ilişkin\n444 ve 447, yayın sözleşmesine dair 487 ilâ 501, kredi mektubu ve kredi emrini düzenleyen\n515 ilâ 519, komisyon sözleşmesine ilişkin 532 ilâ 545, ticari temsilciler, ticari\nvekiller ve diğer tacir yardımcıları için öngörülmüş bulunan 547 ilâ 554, havale\nhakkındaki 555 ilâ 560, saklama sözleşmelerini düzenleyen 561 ilâ 580 inci maddelerinde,\nFikrî mülkiyet hukukuna dair mevzuatta,\nBorsa, sergi, panayır ve pazarlar ile antrepo ve ticarete\nözgü diğer yerlere ilişkin özel hükümlerde,\nBankalara, diğer kredi kuruluşlarına, finansal kurumlara\nve ödünç para verme işlerine ilişkin düzenlemelerde, \nöngörülen hususlardan doğan hukuk davaları ve çekişmesiz\nyargı işleri ticari dava ve ticari nitelikte çekişmesiz yargı işi sayılır. Ancak,\nherhangi bir ticari işletmeyi ilgilendirmeyen havale, vedia ve fikir ve sanat eserlerine\nilişkin haklardan doğan davalar bundan istisnadır.[3]Ticari\ndavalarda da deliller ile bunların sunulması 12/1/2011 tarihli ve 6100 sayılı\nHukuk Muhakemeleri Kanunu hükümlerine tabidir; miktar veya değeri\_bir\nmilyon\_Türk lirasını geçmeyen ticari davalarda basit yargılama usulü\nuygulanır.\_\_Bu fıkrada\nbelirtilen parasal sınır, 6100 sayılı Kanunun ek 1 inci maddesinin birinci\nfıkrasına göre artırılır.[4][5]" - ' Davacı vekili dava dilekçesinde özetle: Davalı ... Mühendislik Şti ile aralarında karşılıklı ticari ilişki bulunduğunu, davalıdan alınan mallar karşılığında çek verildiğini ve davacıya verilen mallar karşılığında da davalıdan çek aldıklarını, ancak kendilerinin çeklerinin günü geldiğinde çek bedellerini ödemelerine rağmen, davalının kendilerine verdiği çeklerin günü gelip bankaya ibraz edildiğinde karşılıklarının olmadığını, karşılıksız kaldığını, buna göre hali hazırda davalıdan sadır olmuş çeklerin karşılıksız kalması nedeniyle 768.771,72 TL alacaklı olduklarını, vadesi gelmeyen 2 adet çekin de karşılıksız kalması halinde davalıdan 1.018.771,72 TL alacaklı olacaklarını, davalıya verilen 4 adet (... 05.08.2018 tarihli 100.000,00 TL, ... Bankası 04.08.2018 tarihli 100.000,00 TL, ... 05.09.2018 tarihli 250.000,00 TL ve ... 05.09.2018 tarihli 250.000,00 tamamı ileri vadeli çekten) toplam 700.000,00 TL yönünden takas-mahsup hükümleri uygulanarak borçlu olmadıklarının tespitini ve tedbir talep etmiş sonuç talep olarak da 4 adet 700.000,00 TL''lik çeklerden dolayı takas mahsup talebi ve hükümleri doğrultusunda davalıya borçlu olmadığının tespitine, çeklerin iptali ve istirdatına ilişkin talepte bulunmuştur.Davalı tarafa usulüne uygun tebliğe rağmen davaya cevap vermediği görülmüştür.' 
- ' Davacı vekili, dava dilekçesinde özetle; müvekkili şirket ile davalı şirket arasındaki ticari ilişkiler kapsamında edimlerin eksiksiz tamamlanıp yerine getirildiğini, ancak davalının şifahen yapılan tüm ihtarlara rağmen davalının cari hesap alacağını ödemediğini, bunun üzerine ödenmeyen cari hesap alacağının tahsili için ------- sayılı dosyasıyla icra takibine başlandığını, borçlu davalının borca itirazı ile birlikte yetki itirazında bulunuğunu, yetki itirazının taraflarınca kabul edildiğini, dosyanın yetkili olarak belirtilen ----- esas sayılı icra dosyası üzerinden davalıya tekrar ödeme emri gönderildiğini, borçlu tarafından ------- tarihli itiraz dilekçesi ile takibe konu borca itiraz edildiğini, müvekkili tarafından tutulan muavin defter kayıtlarında müvekkilinin alacağının olduğu yönünde olduğunu, ayrıca her ne kadar davalı itiraz dilekçesinde müvekkili şirket ile davalı şirket arasında herhangi bir akdi bağ bulunmadığını beyan etmiş ise de; dilekçe ekinde sunulan muavin defter kayıtlarında davalı tarafından yapılan ödemelerin açıkça gözüktüğünü, bu nedenlerle davalının --------- dosyasına yaptığı itirazın iptaline, icra inkar tazminatına hükmedilmesine karar verilmesini talep ve dava etmiştir.' - source_sentence: " davacı vekilince süresinde istinaf kanun yoluna başvurulması üzerine dosya incelendi, gereği konuşulup düşünüldü. \tDAVA\tDavacı vekili dava dilekçesinde özetle; 10.09.2018 tarihinde yapılan olağanüstü genel kurulda alınan kararla şirketin sermayesinin 85.200,00 TL daha arttırılmasına, bunun 19.676.813,95 TL'sinin iç kaynaklardan sermayeye eklenmesine, 65.523.186,05 TL'nin ise nakit olarak şirket hissedarlarının rüçhan haklarını kullanmaları suretiyle paylarına tekabül eden sermayeleri karşılığı ödenmesi gereken miktardan karşılanmasına karar verildiğini, kararın 22.11.2018 tarihinde ticaret siciline tescil edildiğini ve 27.11.2018 tarihli Türkiye Ticaret Sicil Gazetesinde ilan edildiğini, aile şirketi olan davalı şirketin çoğunluk oyuna istinaden ....tarafından hukuken ve fiilen idare edildiğini, şirket kurulduğundan bu yana hiç kar dağıtımı yapılmadığını, müvekkilinin Ankara Batı Asliye Ticaret Mahkemesi'nin .... Esas sayılı dosyasıyla şirketin feshini talep ettiğini, 2014 yılından beri 3 kez karar alınarak sermaye arttırımına gidildiğini, sermaye arttırımlarının temel nedeninin müvekkilinin şirketten çıkması halinde hissesinin azaltılması olduğunu, şirketin sermayesinin arttırılmasını gerektirir TTK'nın376. Maddesindeki sebeplerden birinin bulunmadığını, müvekkilinin önceki artırımda katılım taahhüdünde bulunamadığını, dolayısıyla şirketteki 8.400/28.000 olan hissesinin 8.400/90.000 hisseye düştüğünü, müvekkilinin bu artışla şirketteki pay oranının daha da düşeceğini, müvekkilinin sermaye artışında rüçhan hakkını kullanacak ekonomik gücünün bulunmadığını sermaye artırım kararlarının MK'nın 2. Maddesindeki dürüstlük kuralına aykırı olarak çoğunluğun azınlığı ezecek şekilde alınmasının hukuken korunamayacağını, şirketin feshi davası devam ederken, hiçbir finansal zorunluluk ve gereklilik olmadığı halde, sermaye artışına gitmekteki amacın müvekkiline zarar vermek ve onu ezmek, ortaklıktaki çoğunluğun hakimiyetini artırmak gayesini güttüğünü ileri sürerek davalı şirketin 10.09.2018 tarihinde yapılan olağanüstü genel kurulunda alınan şirketin sermayesinin 85.200,00 TL daha artırılmasına ilişkin sermaye artırım kararının feshine karar verilmesini talep ve dava etmiştir. 
" sentences: - ' Davacı vekili dava dilekçesinde özetle; Şirket tüzel kişiliği ve davalı ile şirket ortağı olan müvekkiller arasında gelişen olaylar ve maddi vakıalara ilişkin ayrıntılı açıklamalara ve delillere ileride yer verilecek olmakla birlikte, şirketin müdürü olarak atanması yapılan davalı ...''in birtakım hileli, haksız ve kötü niyetli eylemleri neticesinde, var olan dava süreçlerinde şirketin bekası ve ticari hayatına devam edebilmesi için her şeyden önce ve ivedilikle halen şirket hissedarı olan müvekkillerin haklarının korunması adına, şirket tarafından yapılan ve/veya yapılacak iş ve işlemler için denetim ve yönetim kayyımı atanması gerektiğini, davalı müdürün yönetmeye çalıştığı ".... Denizcilik Hiz. San. Tic. Ltd. Şti." adlı şirket 2012 yılında müvekkillerden .... ve ... ile daha önce çalışma arkadaşları oldukları ve mevcut müdür olarak görünen ...''in eşi ... ve .... tarafından kurulduğunu, dava konusu şirket liman operasyonları, boğaz operasyonları ve denizcilik sektöründe uzmanlaşmış bir denizcilik şirketi olduğunu, şirket kuruluş esas sözleşmesine göre ..., ilk 20 yıl için (2032) tek şirket müdürü seçilmiş olup, münferit imzası ile şirketin temsil ve ilzamına en geniş şekilde yetkili kılındığını, daha sonra şirketin ortaklarından ve aynı zamanda ...’in kuzeni olan ....’ye ait olan %20 oranındaki hisse, 11.12.2013 tarihinde müvekkillerin bilgisi ve onayı olmaksızın, ...''in eşi davalı ...’e devredildiğini, davalı ...''in müdür olarak atanması kararından önce ise, hissedar müvekkillerden .... ile ...''in ortaklık sıfatları devam etmesine rağmen şirketin o dönemki müdürü ve hissedarı ... tarafından, haksız ve kötü niyetli bir şekilde şirketten uzaklaştırılmaya çalışılmaları, şirketin iyi bir şekilde yönetilememesi ve dava dilekçesinde ayrıntılı olarak açıklanan diğer birçok sebeple taraflarınca şirketin feshi talebi ile bir dava ikame edildiğini, İşbu davanın Bakırköy ... Asliye Ticaret Mahkemesi''nin ... Esas sayılı dava dosyası üzerinden derdest olarak görüldüğünü, Şirketin feshi talepli davanın ikame edilmesinden önce ise hissedarlardan ...''in hayatını kaybettiğini, dosyaya taraflarınca ibraz edilen somut deliller ile haklı görülmüş ve şirket malvarlığının eksiltilmesinin önüne geçilebilmesi için şirket adına kayıtlı taşınır araç ve taşınmazların kayıtlarına tedbir konulduğunu, dava süreci devam ederken, müvekkiller ile şirket tüzel kişiliği arasında bir sulh ortamı oluştuğunu ve sulh görüşmeleri yürütülmeye başlandığını, bu sırada şirketin ana hissedarı ve imza yetkilisi ....''inde vefat ettiğini ve hisseleri eşi ... ve çocuklarına intikal ettiğini, şirketin böylece müdürsüz kaldığını, mali yükümlülüklerini ve faaliyetlerini devam ettirememe tehlikesi ile karşı karşıya kaldığını, Hemen akabinde şirketin ana hissedarı müteveffa ...'' in eşi ve mirasçısı davalı ... müdürlük sıfatını kazanması şartı ile taraflar arasındaki sulh görüşmelerini sürdüreceğini ilettiğinii, müdürlük ve imza yetkisinin kendisine verilmesi kaydıyla kendisi ile anlaşıldığını,öyle ki, sürecin hukuka uygun ve tarafların iradesini en güçlü yansıtacak şekilde yürütülmesi için, çok daha kuvvetli ve barışçıl bir çözüm yöntemi olan Avukatlık Kanunu m.35/A''ya göre bir anlaşma yapılmasında taraflarca mutabık kalındığını, akabinde, şirket vekili meslektaşın ofisinde gerek asiller (....,...., ..., ....., gerekse de taraf vekilleri (Av. ...., Av. .....) ve (şirketin muhasebe yetkilisi ....) 
ile 10.06.2021 tarihinde fiziken toplanıldığını, asillerin medeni bir şekilde anlaştığını ve akabinde vekiller nezdinde 35/A protokolü imzalandığını, yasal ve asgari düzenlemeler ile birlikte davalı ...''in müdür atanmasına ilişkin genel kurul toplantısında hiçbir şekilde çağrı usulüne uymadığını, bunun yanında olağanüstü olarak toplanan genel kurula tüm paydaşlar da katılmadığını, bu nedenle, alınan karar butlan olup, geçersiz olduğunu ayrıca davalı şirket müdürü tarafından son derece kötü niyetli bir şekilde sulh görüşmeleri baltalanmış olmakla birlikte bunun yanında, kötü niyetli birçok iş ve işlem de yapıldığını, şirketin yeni yöneticisi olan ...''in ve ...''in diğer mirasçılarının ise denizcilik sektörü ile ve hatta herhangi bir ticari şirket ile uzaktan yakından en ufak bir bağlantısı yahut tecrübesi bulunmadığını, gerek Türk Ticaret Kanunu''nun ana prensibi olan şirketlerin ticari hayatına devam etmesi önceliği, gerek üçüncü kişiler ve gerekse de müvekkillerin haklarının korunması yalnızca sayın mahkemece verilecek tedbir kararı ile mümkün olabileceğinden haklı sebeplerin varlığı nedeni ile öncelikle tedbiren dava dışı şirketin yapmış olduğu ve/veya yapacağı iş ve işlemlerin denetlenebilmesi ve bu tarihten sonrası için de yapılacak işlemlerin yürütülmesi için re’sen denetim ve yönetim kayyımı atanmasına, davalı ...''in müdürlük sıfatının sona erdirilmesi ve azli ile müvekkil ....''nın şirket müdürü olarak atanmasına, işbu talebimiz kabul görmez ise, mahkemenin re''sen seçeceği bir müdür yahut müdürler kurulunun şirket yönetimi için seçilmesine, yargılama giderleri ile avukatlık vekalet ücretinin davalı taraf üzerine bırakılmasına karar verilmesini talep etmiştir.' - >- 446 ncı maddede belirtilen kişiler, kanun veya esas sözleşme hükümlerine ve özellikle dürüstlük kuralına aykırı olan genel kurul kararları aleyhine, karar tarihinden itibaren üç ay içinde, şirket merkezinin bulunduğu yerdeki asliye ticaret mahkemesinde iptal davası açabilirler. - >- Davacı vekili dava dilekçesinde özetle;------Tedavi masraflarının birden fazla sigortası tarafından temin edilmiş olması halinde, bu masraflar sigortacılar arasında teminatları oranının paylaştırılır" denildiğini, sigortalı dava dışı ---- tedavisine ilişkin ---- fatura ile hastaneye provizyon verilerek yapılan ödemenin --- sigortalı dava dışı ------- tarihli fatura ile hastaneye provizyon verilerek yapılan ödemenin -------sigortalı dava dışı ---- ilişkin ---- fatura ile hastaneye provizyon verilerek yapılan ödemenin ------- tarihli fatura ile hastaneye provizyon verilerek yapılan ödemenin ----sigortalı dava dışı ---- tarihli fatura ile hastaneye provizyon verilerek yapılan ödemenin --- sigortalı dava dışı ---- tedavisine ilişkin --- fatura ile hastaneye provizyon verilerek yapılan ödemenin --- sigortalı dava dışı ---- tedavisine ilişkin ----tarihli fatura ile hastaneye provizyon verilerek yapılan ödemenin---- olmak üzere, toplam ------- alacağın ödeme tarihinden itibaren işleyecek avans faizi ile birlikte tahsilini, yargılama giderleri ile vekalet ücretlerinin davalı tarafından tahmiline karar verilmesini talep ve dava etmiştir. - source_sentence: >- Davacı vekili dava dilekçesinde özetle; müvekkilinin, İstanbul Ticaret Sicil Müdürlüğünde ... 
sicil no ile kayıtlı...A.Ş.'de %10 oranında hisse sahibi olduğunu, davalılardan ...'un ise şirketin kuruluşundan itibaren yönetim kurulu başkanlığı görevini yaptığını, 2014,2015 ve 2016 yıllarına ait genel kurul toplantılarının yapılmadığını, kar dağıtımının da yapılmadığını, 2014.2015 ve 2016 yıllara ait olağan genel kurul toplantılarının 13/03/2018 tarihinde ertelemeli olarak yapıldığını, davalı ...'un genel kurulun iznini almadan 22/01/2018 tarihinde U... .Tic. A.Ş. adında yeni bir şirket kurduğunu ve bu şirket adına işlem yaparak müvekkilinin ortak olduğu şirketin tüm iş bağlantılarını bu şirkete aktardığını, TTK 396. maddesine aykırı hareket ettiğini, davalının ortağı olduğu.. A.Ş. adında bir şirketi daha bulunduğunu, müvekkilinin ortağı bulunduğu ... A.Ş.'den davalının ortağı olduğu ... A.Ş.'ye örtülü sermaye transferi yapıldığını, müvekkilinin ortağı olduğu şirketin içinin boşaltıldığını belirterek fazlaya ilişkin haklarının saklı kalması kaydıyla 50.000,00 TL maddi tazminat ile 100.000,00 TL manevi tazminatın davalı ...'dan tahsiline karar verilmesini ve ayrıca, davalının mal varlığını elden çıkarabileceği, dava sonucunda müvekkili lehine hükmedilecek alacağın elde edilme ihtimalinin ortadan kalkacağı gerekçesiyle, dava sonuçlanıncaya kadar davalı ...'un banka hesapları üzerine ihtiyaten tedbir konulmasına karar verilmesini talep ve dava etmiştir. sentences: - >- Davacı vekili dava dilekçesinde özetle; takibe konu olan bu çek dahil toplam 24 adet çek davacı müvekkilin keşide ettiği ... Ltd. Şti. tarafından ... Servisi AŞ (... ) aracılığıyla ... AŞ'ye emrine ciro edilip iletilmek üzere gönderildiğini, ... Kargo'nun ... Şubesinde meydana gelen şüpheli bir hırsızlık sonucunda söz konusu çekler zayi olduğunu, dava dışı ... Ltd. Şti derhal TTK m. 757 vd. uyarınca ... 3. Asliye Ticaret Mahkemesi'nde ... E. sayılı başvuruyu yaparak zayi olması nedeniyle çek iptali davası açtığını, ... 3. Asliye Ticaret Mahkemesi 07.10.2022 tarihli ara kararla 24 adet çek hakkında tedbiren ödeme yasağı kararı verildiğini, Ödeme yasağı kararı verilen çekler arasında takibe konu çekin de olduğunu, davaya konu çalıntı çekin vadesi geldiği için 3. kötü niyetli kişiler aracılığıyla takibe konulduğunu, Müvekkil yetkili hamil sıfatına haiz olmayan kötü niyetle iktisap sahibi takip alacaklısı ...'e karşı borçlu olmadığını, ... A.Ş. yetkililerine ait imzaların sahte olduğunu, .... AŞ emrine düzenlenen çekler çalındığı için hiç bir zaman eline geçmediğini, çekteki kaşenin de sahte olduğunu, kaşe üzerinde ... AŞ unvanının yazılışı aynen Ticaret Sicil'deki yazılışı gibidir: Sanayii kelimesi gerçek kaşede aynen yazılı olduğunu, Takibe konulan çekteki sahte kaşede ise unvanın, sanayi olarak hatalı şekilde tek "İ" ile yazıldığını, yine gerçek kaşede ... üst satırda yazılırıken unvanın devamı alt satırda yazılı olduğunu, Sahte kaşe de ise ilk satırda ...Sanayi yazıldığını, unvanın devamının alt satırda yer aldığını, Müvekkilin ve çeki ciro ve devir etmiş görünen davacı ... AŞ'nin çeki ondan ciro ve devir almış görünen... Şti. ile herhangi bir ticari ilişkisi olmadığını, ... Ltd. Şti. ve yetkilisi ... bu çekleri nasıl ve ne şekilde ele geçirdiğini mahkemeye açıklamak durumunda olduğunu, .... Ltd. Şti. 
çeki davacıların zararına işbirliği içinde hareket eden sözde iyi niyetli hamil görüntüsü çizmek için kendi şirket yetkilisi davalı ...'ya ciro ettiğini, kargodan çalınan bu çekleri ele geçirerek kötü niyetle cirolayıp icra takibi başlatan bu farklı şirketlerin kuruluş tarihi, sermayesi ve ortaklık yapısı incelendiğinde ortada bambaşka bir tablonun olduğunu, yetkili hamil görünen davalı ... tıpkı kendisi gibi kötü niyetli ve ağır kusurlu ...'dan ciro ve devir aldığı çekin aslında en baştan beri çalıntı olduğunu ve sahte imzayla devir ve ciro edildiğini bildiği ya da en azından bankaya ibraz ettiğinde ödeme yasağı öğrendiği halde dönüp kendisine ciro edenden bedelini talep ve çeki ona iade edebilecekken bunun yerine ağır kusurlu ve kötü niyetle icra takibine giriştiğini, çek görüntüsündeki ciro silsilesinden de açıkça görüleceği üzere keşideci müvekkil ... muhatabı ... A.Ş. ... Şubesi olan ... Ltd. Şti. 'nin emrine düzenlenen, ... hesap nolu, 11/11/2022 ödeme tarihli, ... seri nolu, 89.000 TL bedelli çeki, (dava konusu çek) davalı ...öncelikle ... Bankası AŞ'ye ibraz ettiğini, çek hakkında ödeme yasağı kararı verildiğinin kendisine bildirilmesi üzerine dönüp yaşamın olağan akışına uygun bir biçimde çeki kendisine ciro ve devir edenden talepte bulunmak yerine ... 14. İcra Müdürlüğü ... sayılı dosyasından davacı müvekkil ve dava dışı ... ve ... şirketi dahil cirosu bulunan herkese karşı takibe giriştiğini, yetkili hamil davalı ... bir an için çeki bankaya sunduğu aşamaya kadar iyi niyetli olduğu düşünülse bile ödeme yasağı kararı verildiğini öğrenmesinden itibaren artık basiretli tacir gibi davranıp en azından çalıntı olan ve sahte ciro nedeniyle ödeme yasağı bulunan çekten kaynaklı haklarını geriye doğru kendisine ilk ciro ve devredenden talep etmesi gerekirken bunu yapmadığını, sözde yetkili hamil de onlarla birlikte hareket ederek çalıntı ve sahte imzalı çeki bankaya sunmak ve takibe koymakla hem kötü niyetli davranışmıştır aynı zamanda da ağır kusurlu olduğunu, Müvekkilin yetkili olmayan ve çeki kötü niyetle iktisap eden davalı ...'e herhangi bir borcu olmadığını, davalılardan ...Ticaret Limited Şirketi hakkında ... Cumhuriyet Başsavcılığınca başlatılan 21.11.2022 kayıt tarihli soruşturmaya ait dosyanın davamız ile birebir aynı olması da kötü niyet olgusunun somutlaştığını gösterdiğini, davalıların suç işlemek amacıyla örgüt kurma maksadıyla aynı eylem birliği içerisinde hareket ettiğini kanıtlayan planlı eylemleri, bu kargo hırsızlığı olaylarının alışkanlık haline getirildiğini, sistematik olarak tekrarlandığını ve en nihayetinde davalıların kötü niyetini gün yüzüne çıkarttığını, dava dışı ... Şirketi ile dava dışı ...Tic. Ltd. Şti. arasında ticari ilişki bulunduğunu, bu ticari ilişki çerçevesinde toplam 3 adet çek dava dışı... A.Ş tarafından keşide edilmiş ve tarihler 27/02/2022 yi gösterdiğinde ... Kargo ... şubesi tarafından dağıtıma çıktığını, kargo yola çıkmış ancak araç yoldayken meydana gelen hırsızlık olayı sonucunda gönderime çıkan 3 adet çek çalındığını, .... Şubesi yetkilisi..., ... Polis Amirliğine müracaatta bulunarak müşteki sıfatıyla beyanda bulunduğunu, Kargo yoluyla gönderilmekte olan çeklerin gönderim sırasında çalınması nedeniyle söz konusu çek ... yetkililerinin zilyetliğine geçmediğini, Şüpheliler yine aynı senaryo ile adı geçen şirketin (... A.Ş) bilgilerini kullanarak usulsüz şekilde sahte kaşe oluşturup sahte imza ile çekleri tedavüle çıkarıldığını, çekler ... 14. İcra Müdürlüğü'nün ... E., ... 
E., sayısına kayıtlı olarak takibe konulduğunu, davalıların sürekli olarak aynı avukat ile aynı icra müdürlüğünde benzer çok sayıda dosyasının bulunması da yine kötü niyetin gün yüzüne çıktığının açık bir tezahürü olduğunu, dava konusu çekin çalıntı ve ... AŞ'nin imzasının ve kaşesinin sahte olması davalıların bunu bilerek kötü niyetle eylem ve işbirliği içinde hareket ettikleri, müvekkilin kötü niyetli iktisap eden davalı ve takip alacaklısı ...'e karşı herhangi bir borcunun bulunmadığının sabit olduğu nazara alınarak İİK m. 72/3 f. uyarınca takip borçlusu gecikmeden doğan zararları karşılamak ve alacağın yüzde on beşinden aşağı olmamak üzere göstereceği teminat karşılığında, mahkemeden ihtiyati tedbir yoluyla icra veznesindeki paranın alacaklıya verilmemesi için ihtiyati tedbir kararı verilmesine, davanın ... 16. Asliye Ticaret Mahkemesi ... E. sayılı dosyası ile açılan menfi tespit davası ile birleştirilmesine, davanın kabulü ile dava konusu takibe konu çekten dolayı davacının, davalılara borçlu olmadığının tespitine, ... 14. İcra Müdürlüğü ... sayılı dosyası ile başlatılan takibin iptaline, davalıların haksız ve kötüniyetli olması nedeniyle asıl alacak miktarının %20′sinden aşağı olmamak üzere %100 kötü niyet tazminata hükmedilmesine, yargılama gideri ve vekâlet ücretinin davalılar yüklenmesine karar verilmesini talep ve dava etmiştir. - ' Davacı vekili dava dilekçesinde özetle; 28/07/2018 tarihinde davacı sigorta şirketine Genişletilmiş Kasko Poliçesi ile sigortalı olan ... plakalı aracın park halinde iken yanında duran binanın beton ve sıva parçalarının düşmesi neticesinde maddi hasara uğradığını, yaptırılan ekspertiz incelemesi neticesinde araçta sigorta tenzil ve muafiyet bedelleri düşüldükten sonra belirlenen 14.785 TL''nin sigortalıya ödendiğini, bu nedenlerle fazlaya ilişkin hakları saklı kalmak kaydıyla davanın kabulü ile, 7.231,85 TL tutarındaki alacak için ödeme yapılan 22/10/2018 tarihinden, 6.599,99 TL tutarındaki alacak için ödeme yapılan 22/10/2018 tarihinden itibaren, 953 TL tutarındaki alacak için ödeme yapılan 10/12/2018 tarihinden itibaren işleyecek T.C.Merkez Bankasının Kısa Vadeli Kredilere uyguladığı avans faizi oranında faiz, yargılama gideri ve vekalet ücreti ile birlikte davalıdan tahsiline karar verilmesini talep ve dava etmiştir. ' - >- Davacı vekili dava dilekçesinde özetle; müvekkili şirketin turizm işletmeciliği alanında faaliyet gösterdiğini ve borca batık hale geldiğini, 6102 sayılı TTK 'nun 377 maddesi "yönetim kurulu veya herhangi bir alacak yeni nakit sermaye konulması dahil nesnel ve gerçek kaynakları ve önlemleri gösteren bir iyileştirme projesini mahkemeye sunarak iflasın ertelenmesini isteyebilir. 
Bu halde icra ve Kanunun 179 ila 179/b maddeleri uygulanır " hükmünü içerdiğini, bu hüküm ggreğince iflas erteleme dava dosyasının mahkemeye sunulmasıyla birlikte, tedbir kararı verilebildiğini, bu nedenle tedbir talep ettiklerini belirterek davalarının kabulü ile davacı şirketlerin borca batık olduğunun tespiti ile İİK madde 179 ve ilgili mevzuat gereği iflasının şimdilik 1 yıl süre ile ertelenmesine, İİK madde 179/a gereğince davacı şirketlerin mal varlığının korunması için gerekli muhafaza tedbirlerinin alınmasını, davacı şirketlerin aktifinde kayıtlı bulunan nakil vasıtaların ve aktiflerinin devir ve satış ve muhafazasının engellenmesi ile ilgili trafik şubesine yazı yazılmasına, aktifinde kayıtlı bulunan demirbaşlar, emtia ve diğer araçları, bankalardaki mevduatlara konulacak muhafaza tedbirlerinin durdurulmasına, İİK madde 179/b gereği iflasın ertelenmesi kararı ile birlikte davacı şirketler aleyhine 6183 sayılı yasaya ve ------- ya göre yapılan takipler de dahil olmak üzere davacı şirketler aleyhine yapılmış her türlü icra takibinin ve iflas takibinin durdurulması ve yeni takip yapılmasının engellenmesine, ihtiyati haciz kararlarının uygulanmasının önlenmesine, rehinin paraya çevrilmesi yoluyla yapılmış ve yapılacak takiplerle satışların durdurulmasına, davacı şirketler aleyhine yapılmış ve yapılacak her türlü muhafaza, teslim ve tahliyyeye dair icra işlemlerin durdurulmasına, muhafaza altına alınmış veya alınacak emtia, taşıt, makine teçhizat, leasing kapsamı tüm makine, cihaz, taşıt vs. değerlerlerin iade edilmesine, şirketlerin projesinin hayata geçirilmesi için zorunlu olan elektrik, doğalgaz, su ve sabit telefonlarının kesilmemesine, yurt dışından gelen hizmet bedellerinin ( akreditifin yahut sair şekilde ) bankalarca el konulmasının engellenmesine, davacı şirketlerin temsil ve ilzam yetkilerini aynen devam ettirebilmek için müvekkili şirkete kayyım atanımasına, sermaye artışı, alacakların tahsili, tasarruf tedbirleri ve faaliyetlerin sürdürebilmesi suretiyle borca batıklıktan kurtulabileceğini ileri sürerek iflaslarının bir yıl süre ile ertelenmesine karar verilmesini talep ve dava etmiştir. - source_sentence: >- Davacı vekili dava dilekçesinde özetle; müvekkillerden ... AŞ. nin Tekstil, Matbaa, Hizmet ve İnşaat sektörlerinde faaliyet gösteren şirketlerde ortaklığı bulunan ve bu şirketlerin faaliyederi neticesinde elde edilen kan ortaklatma dağıtmayı amaçlayan bir yatırı şirketi olduğunu, müvekkili şirket 2006 yılında Türkiye' de hareketlenen İnşaat sektöründe yer almak amacıyla araştırmalar yaptığını, sektörde birlikte yol alabileceği kişi ve şirketleri bir araya getirerek 2006 yılında kurulan ... AŞ.' nin kuruluşuma önayak olduğunu, 2008 yılında yapılan 2006-2007 yıllanna ait ekli Genel Kurul Toplantı tutanağı ve hazirun cetveline göre şirket ortaklannın ... AŞ., ... , ... AŞ., ... ve şehir planlayıcısı ... olduğunu, şirketin kuruluş amacı doğrultusunda sermayelerini bir araya getiren ... ve ... dava dışı ... Sanayi AŞ. adına kayıtlı bulunan 14 dönümlük bir araziyi almak için protokol imzaladığını, imzalanan protokol neticesinde satış sözleşmesine konu taşınmaz alımı için protokolde belirlenen %75 lik tutar şirket sermayesinden karşılanmak sureti ile dava dışı şirkete kapora verildiğini, ancak söz konusu arealann başka kişilere satıldığını, davalı ... AŞ. 
vermiş olduğu parayı alabilmesi için yapılan yargılama neticesinde dosyadan elde edilen kök ve ek bilirkişi raporu sonucunda dava dışı şirketin davalı şirkete 7.930.000,00.-TL borçlu olduğunun tespit edildiğini, davalı şirketin kurulduğu günden bu yana geçen zaman zarfında bir kısım faaliyetlerde bulunmuş ise de uzun zamandır gayri faal durumda olduğunu, ticaret sicilde yer alan adresinde de bulunmadığım, davalı şirketin 2011 ve 2012 yılı hesap dönemine ilişkin yapılacak olan Olağan Genel Kurul Toplantısının davalı şirketin alacaklısı olduğu ... San. AŞ, nin adresinde yapılacağının açıklandığını, şirketin merkezi yerine Olağan Genel Kurul Toplantılarını şirket sermayesinin yansından fazlasını alacaklı olduğu borçlusunun adresinde yapılmak istenmesinin müvekkillerinin ortaklık haklanna zarar verme kastı içerisinde Yönetim Kurulu Üyelerinin birlikte hareket ettiğinin göstergesi olduğunu, 28/10/2013 tarihli toplantıda müvekkillerinin ortağı olduğu davalı şirketin Yönetim Kurulu Üyesi ve imza yetkisi olan ... şirket sahibi olduğu hisselerin neredeyse tamamını dava dışı ... San. AŞ. ye satarak devretmiş bulunduğunu, davalı şirketin uzun zamandır gayri faal olduğunu ve dava dışı şirketten ... Asliye Ticaret Mahkemesinin ... E. sayılı dosyasından alınan raporu ile faizleri ile birlikte 13.000.000,00.-TL alacaklı olduğunu beyanla neticeten davanın esasına ilişkin ihdas edilene kadar ihtiyati tedbir karan verilmesi suretiyle Şirket Yönetim Kurulu yerine görev yapmak ya da yönetim kurulu üyelerinin kararlannı denetlemek üzere kayyum atanmasına; davalı ... AŞ nin gerek gayri faal olması, gerek son yaşanan hisse devirleri İle 15/09/2008 tarihi itibariyle 7.930.670,00.-TL alacaklı olduğu şirketin çoğunluk hisselerini ele geçirmesi neticesinde bahse konu alacağının tahsilinin imkansız hale gelmesi sebebiyle TTK 531. maddesi hükümleri uyarınca müvekkillerin ticari ortaklığa devam etmemekte hukuki ve ticari menfaaderinin varlığı gözetilerek feshine karar verilmesine, yargılama giderleri ile ücreti vekaletin karşı tarafa yükletilmesine karar verilmesini talep ve dava etmiştir. sentences: - >- Her tacir, ticari defterleri tutmak ve defterlerinde, ticari işlemleriyle ticari işletmesinin iktisadi ve mali durumunu, borç ve alacak ilişkilerini ve her hesap dönemi içinde elde edilen neticeleri, bu Kanuna göre açıkça görülebilir bir şekilde ortaya koymak zorundadır. Defterler, üçüncü kişi uzmanlara, makul bir süre içinde yapacakları incelemede işletmenin faaliyetleri ve finansal durumu hakkında fikir verebilecek şekilde tutulur. İşletme faaliyetlerinin oluşumu ve gelişmesi defterlerden izlenebilmelidir.Tacir, işletmesiyle ilgili olarak gönderilmiş bulunan her türlü belgenin, fotokopi, karbonlu kopya, mikrofiş, bilgisayar kaydı veya benzer şekildeki bir kopyasını, yazılı, görsel veya elektronik ortamda saklamakla yükümlüdür.Fiziki ortamda tutulan yevmiye defteri, defteri kebir ve envanter defteri ile dördüncü fıkrada sayılan defterlerin açılış onayları, kuruluş sırasında ve kullanılmaya başlanmadan önce noter tarafından yapılır. Bu defterlerin izleyen faaliyet dönemlerindeki açılış onayları, defterlerin kullanılacağı faaliyet döneminin ilk ayından önceki ayın sonuna kadar notere yaptırılır. Pay defteri ile genel kurul toplantı ve müzakere defteri yeterli yaprakları bulunmak kaydıyla izleyen faaliyet dönemlerinde de açılış onayı yaptırılmaksızın kullanılmaya devam edilebilir. 
Yevmiye defterinin kapanış onayı, izleyen faaliyet döneminin altıncı ayının sonuna kadar, yönetim kurulu karar defterinin kapanış onayı ise izleyen faaliyet döneminin birinci ayının sonuna kadar notere yaptırılır. (…) Açılış onayının noter tarafından yapıldığı hâllerde noter, ticaret sicili tasdiknamesini aramak zorundadır. Ancak anonim ve limited şirketlerin ticaret siciline tescili sırasında defterlerin açılış onayları ticaret sicili müdürlükleri tarafından yapılır. Ticari defterlerin elektronik ortamda tutulması hâlinde bu defterlerin açılışlarında ve yevmiye defteri ile yönetim kurulu karar defterinin kapanışında noter veya ticaret sicili müdürlüğü onayı aranmaz. Fiziki ortamda veya elektronik ortamda tutulan ticari defterlerin nasıl tutulacağı, defterlere kayıt zamanı, onay yenileme ile açılış ve kapanış onaylarının şekli ve esasları Gümrük ve Ticaret Bakanlığı ile Maliye Bakanlığınca müştereken çıkarılan tebliğle belirlenir.[18]Pay defteri, yönetim kurulu karar defteri ve genel kurul toplantı ve müzakere defteri gibi işletmenin muhasebesiyle ilgili olmayan defterler de ticari defterlerdir. (Ek cümleler:27/12/2020-7262/27 md.) Ticaret Bakanlığı, pay defteri, yönetim kurulu karar defteri ile genel kurul toplantı ve müzakere defterinin elektronik ortamda tutulmasını zorunlu kılabilir. Sermaye Piyasası Kanunu hükümleri saklıdır.Bu Kanuna tabi gerçek ve tüzel kişiler, 4/1/1961 tarihli ve 213 sayılı Vergi Usul Kanununun defter tutma ve kayıt zamanıyla ilgili hükümleri ile aynı Kanunun 175 inci ve mükerrer 257 nci maddelerinde yer alan yetkiye istinaden yapılan düzenlemelere uymak zorundadır. Bu Kanunun defter tutma, envanter, mali tabloların düzenlenmesi, aktifleştirme, karşılıklar, hesaplar, değerleme, saklama ve ibraz hükümleri 213 sayılı Kanun ile diğer vergi kanunlarının aynı hususları düzenleyen hükümlerinin uygulanmasına, vergi kanunlarına uygun olarak vergi matrahının tespit edilmesine ve buna yönelik mali tabloların hazırlanmasına engel teşkil etmez. - "DAVACI \t: ... - ... [25959-91640-25960] UETSVEKİLİ\t: Av. ... - [16449-44688-49007] UETSDAVALI \t: ... - T.C.N. ... ...VEKİLİ\t: Av. ... - [16000-00988-90203] UETSDAVA\t: Tazminat (Ticari Nitelikteki Hizmet Sözleşmesinden Kaynaklanan)DAVA TARİHİ\t: 20/10/2021KARAR TARİHİ\t: 12/04/2022KARAR YAZIM TARİHİ \t: 14/04/2022Mahkememizde görülmekte olan Tazminat (Ticari Nitelikteki Hizmet Sözleşmesinden Kaynaklanan) davasının yapılan açık yargılaması sonunda,GEREĞİ DÜŞÜNÜLDÜ:İDDİA VE SAVUNMA:Davacı vekili dava dilekçesinde özetle: Davalı ...'ın davacı şirkette 05.01.2017 tarihinde proje mühendisi olarak çalışmaya başlamış,14.07.2021 tarihinde iş akdinin sona erdirildiğini, davalı ile akdedilen iş sözleşmesinde “ Rekabet Yasağı ve Cezai Şartı”nda hüküm altına alındığını, davalının sözleşmeye uymayan şekilde ... isimli firmada çalışmaya başladığını, böylece hizmet sözleşmesine konulan rekabet yasağının davalı tarafça ihlal edildiğini ve rakip firmada ticari bilgi ve sırları hukuka aykırı şekilde kullandığını öne sürerek, şimdilik 10.000,00 TL tazminat ödenmesine karar verilmesini talep ve dava etmiştir. " - >- Davacı vekili dava dilekçesinde özetle; müvekkili ile davalı arasındaki ticari ilişki söz konusu olduğunu, davacı tarafından faturaya konu ürün ve malzeme satışı yapıldığını, bu mallara ilişkin faturaların tanzim edildiğini, müvekkilince tanzim olunan faturalara davalının itirazının bulunmadığını, cari hesaba ilişkin olarak davalının ödeme yapmaması üzerine aleyhine ... Müdürlüğü’nün ... 
sayılı dosyası üzerinden yasal takip başlatıldığını, yapılan takibe davalıca itiraz edilmesi üzerine takibin durdurulduğunu, arabuluculuk görüşmelerinde de tarafların anlaşma sağlayamadıklarından bahisle; davalarının kabulü ile borçlu davalının icra takibine yaptığı itirazın iptalini, takibin devamını, davalı aleyhine takip konusu alacağın %20’den az olmamak üzere icra inkar tazminatına hükmedilmesini, yargılama giderleri ile vekalet ücretinin davalı yan üzerine bırakılmasını vekaleten arz ve talep etmiştir. model-index: - name: SentenceTransformer results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: sts dev type: sts-dev metrics: - type: pearson_cosine value: 0.19373258731869963 name: Pearson Cosine - type: spearman_cosine value: 0.24307341815427166 name: Spearman Cosine - type: pearson_manhattan value: 0.2245827911400446 name: Pearson Manhattan - type: spearman_manhattan value: 0.2468102784042943 name: Spearman Manhattan - type: pearson_euclidean value: 0.22537635202224982 name: Pearson Euclidean - type: spearman_euclidean value: 0.24695143686545143 name: Spearman Euclidean - type: pearson_dot value: 0.18775862207030505 name: Pearson Dot - type: spearman_dot value: 0.2124049530103558 name: Spearman Dot - type: pearson_max value: 0.22537635202224982 name: Pearson Max - type: spearman_max value: 0.24695143686545143 name: Spearman Max license: apache-2.0 base_model: - dbmdz/bert-base-turkish-cased --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained on the [MesutDemirel/legal_nli_tr_v1](https://huggingface.co/datasets/MesutDemirel/legal_nli_tr_v1) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - [MesutDemirel/legal_nli_tr_v1](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ "Davacı vekili dava dilekçesinde özetle; müvekkillerden ... AŞ. 
nin Tekstil, Matbaa, Hizmet ve İnşaat sektörlerinde faaliyet gösteren şirketlerde ortaklığı bulunan ve bu şirketlerin faaliyederi neticesinde elde edilen kan ortaklatma dağıtmayı amaçlayan bir yatırı şirketi olduğunu, müvekkili şirket 2006 yılında Türkiye' de hareketlenen İnşaat sektöründe yer almak amacıyla araştırmalar yaptığını, sektörde birlikte yol alabileceği kişi ve şirketleri bir araya getirerek 2006 yılında kurulan ... AŞ.' nin kuruluşuma önayak olduğunu, 2008 yılında yapılan 2006-2007 yıllanna ait ekli Genel Kurul Toplantı tutanağı ve hazirun cetveline göre şirket ortaklannın ... AŞ., ... , ... AŞ., ... ve şehir planlayıcısı ... olduğunu, şirketin kuruluş amacı doğrultusunda sermayelerini bir araya getiren ... ve ... dava dışı ... Sanayi AŞ. adına kayıtlı bulunan 14 dönümlük bir araziyi almak için protokol imzaladığını, imzalanan protokol neticesinde satış sözleşmesine konu taşınmaz alımı için protokolde belirlenen %75 lik tutar şirket sermayesinden karşılanmak sureti ile dava dışı şirkete kapora verildiğini, ancak söz konusu arealann başka kişilere satıldığını, davalı ... AŞ. vermiş olduğu parayı alabilmesi için yapılan yargılama neticesinde dosyadan elde edilen kök ve ek bilirkişi raporu sonucunda dava dışı şirketin davalı şirkete 7.930.000,00.-TL borçlu olduğunun tespit edildiğini, davalı şirketin kurulduğu günden bu yana geçen zaman zarfında bir kısım faaliyetlerde bulunmuş ise de uzun zamandır gayri faal durumda olduğunu, ticaret sicilde yer alan adresinde de bulunmadığım, davalı şirketin 2011 ve 2012 yılı hesap dönemine ilişkin yapılacak olan Olağan Genel Kurul Toplantısının davalı şirketin alacaklısı olduğu ... San. AŞ, nin adresinde yapılacağının açıklandığını, şirketin merkezi yerine Olağan Genel Kurul Toplantılarını şirket sermayesinin yansından fazlasını alacaklı olduğu borçlusunun adresinde yapılmak istenmesinin müvekkillerinin ortaklık haklanna zarar verme kastı içerisinde Yönetim Kurulu Üyelerinin birlikte hareket ettiğinin göstergesi olduğunu, 28/10/2013 tarihli toplantıda müvekkillerinin ortağı olduğu davalı şirketin Yönetim Kurulu Üyesi ve imza yetkisi olan ... şirket sahibi olduğu hisselerin neredeyse tamamını dava dışı ... San. AŞ. ye satarak devretmiş bulunduğunu, davalı şirketin uzun zamandır gayri faal olduğunu ve dava dışı şirketten ... Asliye Ticaret Mahkemesinin ... E. sayılı dosyasından alınan raporu ile faizleri ile birlikte 13.000.000,00.-TL alacaklı olduğunu beyanla neticeten davanın esasına ilişkin ihdas edilene kadar ihtiyati tedbir karan verilmesi suretiyle Şirket Yönetim Kurulu yerine görev yapmak ya da yönetim kurulu üyelerinin kararlannı denetlemek üzere kayyum atanmasına; davalı ... AŞ nin gerek gayri faal olması, gerek son yaşanan hisse devirleri İle 15/09/2008 tarihi itibariyle 7.930.670,00.-TL alacaklı olduğu şirketin çoğunluk hisselerini ele geçirmesi neticesinde bahse konu alacağının tahsilinin imkansız hale gelmesi sebebiyle TTK 531. maddesi hükümleri uyarınca müvekkillerin ticari ortaklığa devam etmemekte hukuki ve ticari menfaaderinin varlığı gözetilerek feshine karar verilmesine, yargılama giderleri ile ücreti vekaletin karşı tarafa yükletilmesine karar verilmesini talep ve dava etmiştir. 
", 'Davacı vekili dava dilekçesinde özetle; müvekkili ile davalı arasındaki ticari ilişki söz konusu olduğunu, davacı tarafından faturaya konu ürün ve malzeme satışı yapıldığını, bu mallara ilişkin faturaların tanzim edildiğini, müvekkilince tanzim olunan faturalara davalının itirazının bulunmadığını, cari hesaba ilişkin olarak davalının ödeme yapmaması üzerine aleyhine ... Müdürlüğü’nün ... sayılı dosyası üzerinden yasal takip başlatıldığını, yapılan takibe davalıca itiraz edilmesi üzerine takibin durdurulduğunu, arabuluculuk görüşmelerinde de tarafların anlaşma sağlayamadıklarından bahisle; davalarının kabulü ile borçlu davalının icra takibine yaptığı itirazın iptalini, takibin devamını, davalı aleyhine takip konusu alacağın %20’den az olmamak üzere icra inkar tazminatına hükmedilmesini, yargılama giderleri ile vekalet ücretinin davalı yan üzerine bırakılmasını vekaleten arz ve talep etmiştir. ', "DAVACI \t: ... - ... [25959-91640-25960] UETSVEKİLİ\t: Av. ... - [16449-44688-49007] UETSDAVALI \t: ... - T.C.N. ... ...VEKİLİ\t: Av. ... - [16000-00988-90203] UETSDAVA\t: Tazminat (Ticari Nitelikteki Hizmet Sözleşmesinden Kaynaklanan)DAVA TARİHİ\t: 20/10/2021KARAR TARİHİ\t: 12/04/2022KARAR YAZIM TARİHİ \t: 14/04/2022Mahkememizde görülmekte olan Tazminat (Ticari Nitelikteki Hizmet Sözleşmesinden Kaynaklanan) davasının yapılan açık yargılaması sonunda,GEREĞİ DÜŞÜNÜLDÜ:İDDİA VE SAVUNMA:Davacı vekili dava dilekçesinde özetle: Davalı ...'ın davacı şirkette 05.01.2017 tarihinde proje mühendisi olarak çalışmaya başlamış,14.07.2021 tarihinde iş akdinin sona erdirildiğini, davalı ile akdedilen iş sözleşmesinde “ Rekabet Yasağı ve Cezai Şartı”nda hüküm altına alındığını, davalının sözleşmeye uymayan şekilde ... isimli firmada çalışmaya başladığını, böylece hizmet sözleşmesine konulan rekabet yasağının davalı tarafça ihlal edildiğini ve rakip firmada ticari bilgi ve sırları hukuka aykırı şekilde kullandığını öne sürerek, şimdilik 10.000,00 TL tazminat ödenmesine karar verilmesini talep ve dava etmiştir. ", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.1937 | | **spearman_cosine** | **0.2431** | | pearson_manhattan | 0.2246 | | spearman_manhattan | 0.2468 | | pearson_euclidean | 0.2254 | | spearman_euclidean | 0.247 | | pearson_dot | 0.1878 | | spearman_dot | 0.2124 | | pearson_max | 0.2254 | | spearman_max | 0.247 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### MesutDemirel/legal_nli_tr_v1 * Dataset: [MesutDemirel/legal_nli_tr_v1](https://huggingface.co/datasets/MesutDemirel/legal_nli_tr_v1) at [7f0c3ba](https://huggingface.co/datasets/MesutDemirel/legal_nli_tr_v1/tree/7f0c3bade4d136eb7fbd18b470a5a8b2f173569b) * Size: 202,000 training samples * Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | premise | hypothesis | label | |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 10 tokens</li><li>mean: 290.07 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 275.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~42.80%</li><li>1: ~39.10%</li><li>2: ~18.10%</li></ul> | * Samples: | premise | hypothesis | label | |:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Davacı vekili dava dilekçesinde özetle; müvekkili şirketin ... Başkanlığına kayıtlı olarak faaliyet gösterdiğini, müvekkili şirket yetkilisi 17/06/2022 tarihinde şirketle ilgili yapılacak iş ve işlemlerle ilgili karar defterini aradığında bulamadığını, karar defterinin kayıp mı yoksa çalınmış mı olduğundan da emin olamadıklarını, ancak tüm aramalara rağmen karar defterini bulamadıklarını, karar defterinin bulunamamış olmasından ötürü zayi olduğu sonucuna varıldığını, zayi olan şirket karar defterine ilişkin zayi belgesinin verilmesi talep etmiştir. </code> | <code>Davacı vekili dava dilekçesinde özetle; 15.04.2022 tarihinde şirket binasında yetkilisi olduğu şirkete ait karar defterinin noterde işlem yapılacağı esnada kaybolduğunu, tüm aramalara rağmen bulunamadığını, yetkilisi olduğu şirkete ait karar defterinin zayi olduğunu belirterek şirkete ait karar defteri, tarafınca tüm dikkat ve özen yükümlülüklerine riayet edilmesine rağmen kaybolduğundan şirket karar defterinin tespit ve zayi belgesinin verilmesi talep etmiştir. </code> | <code>0</code> | | <code> Davacı vekili dava dilekçesinde özetle, müvekkilinin alacaklısı olduğu İstanbul 3. İcra Müdürlüğü'nün -------- Esas sayılı dosyası ve müvekkilinin davalısı olduğu tasfiye olan şirket tarafından ikame olunan ve halen derdest olan İstanbul 21. İcra Hukuk Mahkemesinin -------- Esas sayılı dosyasının tasfiye işlemlerinden önce açıldığını, davalı şirketin icra dosyası derdestken ve dava devam ederken tasfiye olmasından hareketle işbu İstanbul 3. İcra Müdürlüğünün--------- Esas sayılı icra dosyasına mahsus olmak üzere şirketin ihyasına karar verildiğini, icra takibi ve dava derdest iken kaydın terkini ile tüzel kişiliğin sona erdiğinin kabul edilemeyeceğinden TTK 224 ve 445.maddeleri uyarınca tasfiye memurlarının alacaklıların haklarını da korumak zorunda olması nedeniyle davalı şirketin tüzel kişiliğinin ihyasına karar verilmesi ile yargılama giderleri ile vekalet ücretinin davanın açılmasında kusurlu olan diğer davalı tasfiye memuru ... üzerinde bırakılmasını, yasal hasım durumunda olan ... 
üzerinde bırakılmamasına karar verilmesini talep etmiştir.</code> | <code> davacı vekilince istinaf kanun yoluna başvurulması üzerine dairemize gönderilmiş olan dava dosyası incelendi.TARAFLARIN İDDİA VE SAVUNMALARININ ÖZETİ Davacı vekili dava dilekçesinde özetle; müvekkili ...'in davalı şirketin hissedarı olduğunu, davalı şirketin yönetim kurulunun ... oluştuğunu, şirketin eski yönetim kurulu başkanı ... 22.09.2015 tarihinde vefat ettiğini, şirketin Makedonya'nın başkenti Üsküp’te yapmayı üstelendiği ... adlı projelerle ilgili olarak yurt dışına çok yüksek meblağlarda para transfer etmeye başlandığını, davalı şirketin kasten iflasa sürüklendiğini, Makedonya’da bulunan projenin maliyetlerinin şişirildiğini, bu hususlarla ilgili bilgi edinme hakkının engellendiğini, müvekkiline yönelik eylemleri nedeniyle, davalı şirketin Yönetim Kurulu Başkanı ... hakkında İstanbul 42. Asliye Ceza Mahkemesinin 2016/132 E. Sayılı dosyasıyla kamu davası açıldığını, Aerodrom Makedonya operasyonunun başında bulunan ... baskı yapması sonucunda müvekkiline Makedonya resmi mercilerince yapılan inşaata ilişkin bilgi verilmediğini, genel kurul toplantısı öncesi bilgi alma hakkının da engellendiğini, gerek davalı şirketin defter, kayıt ve belgeleri üzerinde gerekse Makedonya Cumhuriyeti ile Türkiye arasındaki adli yardımlaşma anlaşması çerçevesinde Makedonya’daki inşaatlar üzerinde bilirkişi incelemesi yapılması gerektiğini, davalı şirketin 15.07.2016 tarihli genel kurulunda alınan 3 nolu kararla şirketin 2013, 2014 ve 2015 yıllarına ait yönetim kurulu faaliyet raporları ve finansal tablolarının kabul edildiğini, 4 nolu kararla şirketin yönetim kurulu üyelerinin ibrasına karar verildiğini, 5 nolu kararla şirketin 2013, 2014 ve 2015 yıllarına ait denetçi raporunun kabul edildiğini, 6 nolu kararla şirketin mevcut yönetim kurulunun, yapılan tüm usulsüzlüklere rağmen yine aynı şekilde göreve seçildiğini, 7 nolu kararla şirketin kâr dağıtımı yapmamasına karar verildiğini, 8 nolu kararla davalı şirket sermayesinin 80.000.000 TL’den 158.000.000 TL’ye çıkarıldığını, bu karara müvekkili dışında ... de muhalefet ettiğini, büyük hissedar olan ... A.Ş.'nin varlıklarının Makedonya'da bulunan ... ve ... projeleri gerekçe gösterilerek devamlı surette Makedonya’ya aktarıldığını, bahsi geçen nedenlerle davalı şirketin 15.07.2016 tarihli genel kurulunda alınan ve müvekkilinin muhalefet şerhi koyduğu, kanuna aykırı, davalı şirketi zarara uğratmaya ve yönetim kurulu üyelerine haksız çıkar sağlamaya yönelik 3, 4, 5, 6, 7 ve 8 nolu kararların iptaline, TTK'nın 449. maddesi gereğince bu kararların yürürlüklerinin dava sonuna kadar durdurulmasına karar verilmesini talep etmiştir.Davalı vekili savunmasında özetle; davacının uzun yıllar boyunca babası ve aile fertleri ile hiçbir irtibatı olmadığını, müvekkilleri tarafından İstanbul 2. Sulh Hukuk Mahkemesinin 2016/445 Esas savılı dosyası tahtında davacı ...'e karşı vasi tayini davası açıldığını, dosvanın halen derdest olduğunu, İstanbul 2. Sulh Hukuk Mahkemesinin 2016/445 Esas savılı dosyasının işbu davada bekletici mesele yapılması gerektiğini, ...A.Ş.' nin, ... Sanayi ve Tic A.Ş.'nin hisselerinin %99,87'sine,....Tic. ve San. A.Ş.' 
nin hisselerinin ise %95,13'üne sahip olduğundan, iki şirkette de hakim hissedar konumunda olduğunu, murisin vefatından sonra imzalanan tüm sözleşmelerin ve yapılan ödemelerin, murisin şirket adına verdiği taahhütlerinin gerçekleştirilmesi amacına yönelik olduğunu, müvekkillerinin sadece projelerin tamamlanmasını sağlayacak ticari ve mali riskler oluşturmayacak kararlar aldığını, bilanço ve mali tabloların genel kuruldan önce şirket merkezinde pay sahiplerinin inceleyebilmesi için TTK hükümlerine uygun olarak süresi içinde hazır bulundurulduğunu, .... A.Ş. tarafından davacıya gönderilen ihtarname ile davacının Şirket'e sormuş olduğu soruların yanıtlandığını, Şirket'in tüm bilgi ve belgelerinin ... Anonim Şirketi ile paylaşıldığını, özel denetçi raporunun davacıya bizzat teslim edildiğini, ... Sanayi Ve Ticaret A.Ş. yönetim kurulu toplantısında Mekedonya' da yapımı devam eden inşaatın finansman ihtiyacının sağlanması için sermaye artırımına gidilmesine, sermaye artırımı işlemleri gerçekleşene kadar geçecek süre içinde şirket ortaklarından olan .... A.Ş.’den avans alınmasına ve alınan paraların sermaye avansı şeklinde değerlendirilmesine karar verildiğini, ...A.Ş.'nin büyük ortağı olduğu ... Sanayi ve Tic A.Ş.'nin şubesi ... A.D'nin sözleşmede belirtilen süre içinde projeleri tamamlama yükümlülüğü altına girdiğini, belirtilen süre içinde projelerin tamamlanmaması halinde sözleşmenin tarafı olan idarelere tek taraflı fesih hakkı tanındığını ve ihale bedelinin %20'sine varan cezai şartlar öngörüldüğünü belirterek, haksız ve kötü niyetli davanın reddine karar verilmesini, İstanbul 2. Sulh Hukuk Mahkemesinin 2016/445 Esas sayılı dosyası tahtında davacıya karşı ikame edilen vasi tayini davasının bekletici mesele yapılmasını, neticede davanın reddine karar verilmesini talep etmiştir.</code> | <code>0</code> | | <code>Davacı vekili dava dilekçesinde özetle; -------teminatından ödenen hasar bedelinin zarar sorumlusu olduğu öne sürülen taşıyıcıdan rücuen tahsilini teminen başlatılan icra takibine vaki itirazın istemi ile ikame edilen, bu yönüyle de halefiyet ilkesine dayanan işbu davada sayın davacı vekili ---- harçlandırdığı dava dilekçesinde------- emtianın ---- dava dışı akdi taşıyıcı---- taşıyıcısı olan davalı şirketin sorumluluğu altındaki ------------ plakalı araçla ----- olarak taşındığını ancak nakliye süreci nihayetinde davalının sorumluluğu altında taşınan ------- müvekkilinin sigortalısı konumunda olan alıcısı emrine, araç sürücüsünün iştiraki sağlanmak suretiyle düzenlenen tutanağa kayden hasarlı vaziyette teslim edildiğini, olayın müvekkiline bildirilmesi üzerine görevlendirilen bağımsız eksperin mahallinde yaptığı hasar tespit çalışması sonucuna göre belirlediği ---tutarındaki hasar bedelini------ eden müvekkilinin TTK md.1472'ye göre sigortalısının haklarına halef olduğunu, ödenen tazminatın dava konusu emtiayı teslim aldığı andan teslim edinceye kadar ziya ve hasarının tamamından sorumlu olan davalıdan rücuen tahsilini teminen icra takibi başlatıldığını, ancak davalı tarafın aleyhine yürütülen takibi haksız yere yaptığı itirazla durdurduğu için işbu davanın açılması zarureti doğduğunu gerekçe göstermek ve müvekkilinin fazlaya ilişkin tüm haklarını da saklı tutmak suretiyle) özetle; davanın kabulüne, davanın dayandığı icra takibine vaki itirazların kaldırılmasına ve kaldığı yerden devamına ----- karar verilmesini ve davalı borçlu aleyhine %20'den az olmamak üzere icra inkâr tazminatına hükmedilmesini talep etmiştir.</code> | <code>Davacı vekili dava dilekçesinde özetle; Davalılardan 
... A.Ş tarafından, davacı müvekkiller ... Tic.Ltd Şti ve ... A.Ş ‘de aralarında olduğu Borçlu aleyhine İstanbul... İcra Md. ... E Sayılı dosyasındamn Kambiyo senetlerine özgü haciz yoluyla İcra takibi başlattığı, Takip konusu .../Dikmen Şubesine ait ... Seri nolu 13.000 TL bedelli çek Zayi bir çek olup, bu çekin de aralarında bulunduğu toplam 29 adet çek ve 2 adet senet 10.08.2018 tarihinde müvekkillerden çek lehtarı ... Ayakkabı Paz. A.Ş’nin yetkilisi ...’in arabasının camı kırılmak suretiyle çalındığı, çeklerin büyük kısmının ... tarafından bir sonraki gün müşterilere verilmek üzere imzalanıp kaşelenmek suretiyle cirolanmış halde bulunmakta ise de huzurdaki dosyaya konu çek cirosuz halde çalındığı, çekin son yetkili hamili ... Ayakkabı A.Ş yetkilisi ... tarafından derhal ... C.Başsavcılığına ... Soruşturma sayılı dosyasından suç duyurusunda bulunulduğu, ayrıca Bakırköy ...ATM ...E Sayılı dosyasından zayi nedeniyle iptal davası açıldığı, mahmece 04.09.2018 tarihinde çalıntı çeklerle ilgi ödemeden men kararı verildiği ve ilgili banka şubelerine yazı yazıldığı, ancak, bahse konu soruşturma ve çek iptali davasının devam ettiği süreçte çalınan çeklere ilişkin icra takipleri açılmaya başlandığı, huzurdaki menfi tespit ve istirdat davasına konu İstanbul ... İcra Md. ... E Sayılı dosyası ve diğer takip dosyalarında, dosya alacaklısı ... A.Ş olduğu, takiplerin bir kısmına Takibin iptali davaları açıldığı, takip ve dava konusu çekte yer alan ... Ayakkabı Paz A.Ş cirosunun tümüyle sahte olduğu, cirodaki şirket kaşesi sahte olduğu gibi Kaşe üzerindeki imza da şirket yetkilisinin eli ürünü olmadığı, bu hususta imza incelemesi talep edildiği, huzurdaki menfi tespi ve istirdat davasına konu icra takibinin ilişkisi olduğu çek de müvekkillerden keşideci ... Kundura Tarafından 30.09.2019 tarihli olarak keşide edilmiş olmasına rağmen çalınmasından sonra keşide tarihi 31.12.2018 olarak değiştirildiği ve bu şekilde takibe konulduğu, keşide tarih değişikliği de sahte imza ile paraflandığı, bu da müvekkile ait olmadığı, çekin çalınmasından sora tahrif edildiği, gerek huzurdaki davada, gerekce diğer takiplere konu çeklerdeki ciro silsilesinindeki imzaların benzerlik arz ettiği, dava konusu çekte ... Ayakkabı cirosu sonrasında farklı üç şirket (...-...,... ve ...) ait ciro imzaları gözle görülebilir ölçüde aynı olduğu, Takiplere konu edilen çekler arkasındaki...silsilesindeki şirketlerin her bir silsilede aynı olması ve söz konusu çeklerin nerdeyse tamında davalı ... A.ş tarından takibe konulmuş olmasının tesadüfi olmadığı, çeklerin lehdarı olan ... Ayakkabı A.Ş ‘nin çeklerde yer alan kendisinden sonraki ciro silsilesindekilerle hiçbir ticari ilişkisi bulunmadığı, Ancak, bahse konu icra takiplerine muhatap olunmaya başlanması ile örneğin, ... Yazılım Ltd , ... A.Ş, ... İnş. San Tic Ltd Şti , ..., ... isimli şirketlerin en az birinin her bir takipteki ciro silsilesinde yer aldığı, arabadan çalınan çeklerle ilgili bir doalndırıcılık eyleminin mağduru oldukları açık olan müvekkillerden ... Ayakkabıcılık A.Ş İstanbul ... İcra Md. ...E Sayılı dosyasından tebliğ alınan ödeme emriyle birlikte Mağduru oldukları eylemin, arabadan çalınançeklerin boyutunu aşan çok daha geniş kapsamlı bir eylem olduğunun anlaşıldığı, müvekkile 24.01.2019 günü tebliğ edilen İst... İcra Md ...E Sayılı ödeme emri ekindeki çekin ciro silsilesi içinde yer aldığını, şirket kaşe ve üzerinde atılşı imzanın sahte olduğu ciro silsilesinde kendisinden sonra gelen yukarıda belirtilen şirketler yer akldığı ve ... 
Yazılım Ltd Şti’ne ait olduğunun, takibin de ... A.Ş tarafından açıldığının görüldüğü, icra Takiplerinin ihtiyati haciz talepli başlatıldığından müvekkil ... A.Ş’nin hesaplarına bloke konulduğu, davaya konu İstanbul ... İcra Md. ... E Sayılı dosyası ile açılan takip kapsamında müvekkil ... A.Ş yetkilisi ... tarafından gere kendi firması gerekse ... Ltd Şti hakkında tesis edilecek haciz ve bloke işlemlerinden korunabilme adına 17.141.35 TL tutarlı kapak hesabı dosyaya yatırıldığı, huzurdaki dosyaya ilişkin olarak TTK.5/A maddesi uyarınca 28.01.2019 tarih ... başvuru numarası ile ...Arabulucuk Bürosuna başvurulduğu ve taraflar arasında ... arabuluculuk numaralı dosya ile arabuluculuk sürecine girildiği, açıklamalar çerçevesinde çeki takibe kayan ... Faktoring A.Ş’nin iyi niyetli hamil olduğu iddiasına bulunamayacağı, zira üzerinde tahrifat yapılan, ciro silsilesinde en saf bakışla dahi şüpheli görünen çekleri, haklarında hiç bir soruşturma yapmadan ve keşideciyi aramadan almış olması, özellikle de bir finans kuruluşu olduğu dikkate alındığında basiretli tacir olma gereklerine ve yükümlülüklerine aykırı düştüğü, davalının ağır kusurlu olmaktan öte kötü niyetle hareket ettiğini gösterdiği, çalıntı ve tahrifatlı çeke dayalı olarak İstanbul ... İcra Md.... E Sayılı dosyası ile açılan icra takibine iliişkin olarak her iki müvekkil yönünden takip konusu çekten dolayı davalılara borçlu olmadıklarını ve müvekkillerden ... Ayakkabı A.Ş tarafından dosyaya yapılmak zorunda kalınan 11.141.35 TL’nın ödeme tarihinden itibaren işeyecek temerrüt faizi ile birlikte davalılardan ... Faktoring A.Ş’den istirdatına ve takibe konu alacağın %20’den az olmamak üzere kötü niyet tazminatına, yargılama giderleri ile vekalet ücreti,nin davalılara yükletilmesine karar verilmesi talep edilmiştir.</code> | <code>1</code> | * Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss) ### Evaluation Dataset #### MesutDemirel/legal_nli_tr_v1 * Dataset: [MesutDemirel/legal_nli_tr_v1](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [7f0c3ba](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) * Size: 5,000 evaluation samples * Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | premise | hypothesis | label | |:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------| | type | string | string | int | | details | <ul><li>min: 36 tokens</li><li>mean: 286.62 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 276.45 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~39.00%</li><li>1: ~44.30%</li><li>2: ~16.70%</li></ul> | * Samples: | premise | hypothesis | label | 
|:------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------| | <code>Davacı vekili dava dilekçesinde özetle; Davacı şirketin taşıyan sıfatıyla davalı şirkete ait yükü kendisi ile yapılan taşıma sözleşmesi uyarınca ... Limanından ... tarihinde yükleyerek .../ ... Limanı’na taşıdığını ve yükü ihtiva eden 3 adet konteyneri liman sahasına kapalı ve mühürlü olarak ... tarihinde gemiden tahliye ettiğini, ... numaralı konişmentoda belirtildiği üzere, söz konusu deniz taşıma işinde davacı şirkete ait ‘...’ numaralı 3 adet konteynerin kullanıldığını, taşıma konusu yüklere ilişkin varış ihbarlarının düzenlendiğini ve yüklerin tahliye edildiğini, bugüne dek söz konusu yüklerin teslim alınmadığını, yüklerin konişmentolarda öngörülen süre içerisinde gönderilen tarafından teslim alınmaması nedeniyle, davacı şirket tarafından yapılan bütün iyiniyetli girişimlerin sonuçsuz kaldığını, aradan geçen yaklaşık 11 aylık süre zarfında yükün teslim alınmadığını, konteynerlerin tahliye edilmediğini, konteynerlerin tahliye edilmemesi üzerine davacı taşıyan şirket çalışanı tarafından, davalıya müteaddit defa ihtar yapıldığını ve bilgi istendiğini, ancak aradan geçen bunca süre zarfında davalının mevzubahis süreçten haberdar olduğunu belirtmesine rağmen herhangi bir ödeme yapmadığını ve görüşmelerden herhangi bir netice alınamadığını, sonuç olarak davacı şirket tarafından deniz nakliyatı işinde kullanılan üç adet konteynerin ... 
Liman sahasında dolu olarak bekletildiğini, davacının söz konusu konteynerleri deniz nakliyatı işinde kullanmaktan mahrum kaldığını, uyuşmazlığın konusunun, davacı şirkete ait ve taraflar arasındaki navlun sözleşmesi uyarınca deniz nakliyatında kullanılan konteynerlerin konişmentolarda öngörülen on günlük süre içerisinde (free time) iade edilmemesi sebebiyle oluşan demuraj alacağı talebine ilişkin olduğunu, konişmentolar incelendiğinde konteynerlerin on günlük süre sonunda iade edilmemesi halinde, günlük olarak belirli bir ücretin ödeneceği yönünde hükmün bulunduğunu, TTK m, 1207 hükmünün "Gönderilen, eşyanın teslimini isteme hakkını kullanmazsa, taşıtan navlun sözleşmesi gereği navlunu ve diğer alacakları taşıyana ödemekle yükümlüdür. " şeklinde düzenlendiğini, somut uyuşmazlık bakımından navlun sözleşmesinin taraflarının taşıyan olarak davacı şirket ile taşıtan olarak davalının bulunduğunu, navlun sözleşmesi nedeniyle oluşan navlun ücreti ile genel olarak navlun teferruatı olarak nitelendirilen masrafların borçlusunun yine taşıtan olduğunu, zira gönderilenin yükü teslim almaması nedeniyle, TTK m. 1203 vd. uyarınca davalı taşıtanın oluşan demuraj alacağından doğrudan sorumlu olduğunu, bunun yanında konişmentoda yer alan hükümler uyarınca, her biri ... olan konteyner bedellerinin de davacıya ödenmesi gerektiğini, bu bedelden de taşıtanın sorumlu olduğunu belirterek fazlaya ilişkin hakları saklı kalmak kaydıyla davacı şirkete ait konteynerleri navlun sözleşmesinin tarafı olan davalının kusuruyla tahliye edilmemesi nedeniyle oluşan demuraj ücretine mahsuben şimdilik 41.400,- USD ve 3 konteyner için 12.000,-USD olmak üzere toplam 53.400,- USD’nin dava tarihinden itibaren işleyecek 3095 sayılı Kanun' un 4/a fıkrası uyarınca hesaplanacak faizi ile birlikte davalıdan tahsiline, yargılama giderleri ile vekâlet ücretinin davalıya yükletil meşine karar verilmesini talep ederek iş bu davayı açmıştır.</code> | <code>Davacı vekili dava dilekçesinde özetle; Davalı tarafın taşıyan müvekkili ... A/Ş vasıtası ile ... numaralı konişmento tahtında ... numaralı 1 adet 40'lık REEFER tip konteyner muhteviyatı yükünü Hindistan'ın Cochin Limanından Gemlik Limanı' na denizyolu ile taşıttığını, bu taşımalarda davalı yanın ithalatçı ve taşımaya ilişkin konişmentoya göre yük alıcısı konumunda olduğunu, davalının ithalatçısı ve yük alıcısı olduğu ... numaralı konişmento tahtında taşınan 1 adet 40 'lık reefer konteynerin yükleme limanı olan Hindistan' in Cochin Limanı' nda 11.07.2017 tarihinde gemiye yüklendiğini ve 28.08.2017 tarihinde Gemlik ... Limanı' nda gemiden tahliye edildiğini, davalının ... numaralı konişmento tahtında taşman emtiaları tahliye limanı olan Gemlik Limanı' na ulaşmadan önce davalıya bir örneği delil listelerinde sunulan "..." yani "Varış İhbarnamesi" gönderildiği ve davalının yükünün 28.08.2017 tarihinde Gemlik Limanı' na ulaşacağının ihbar edildiğini, tahliye limanındaki konteyner muhteviyatı yükün konteynerden boşaltılması için serbest ve ücretsiz sürenin (starya süresi) 3 gün olduğunu, davalının 3 günlük serbest ve ücretsiz süre (starya süresi) içinde bu yükünü konteynerden boşaltması aksi halde günlük değişken oranlara demuraj ücretlerinin uygulanacağı belirtilerek tahliye limanında uygulanan demuraj tarifesi bildirildiğini, bu bilgiler ışığında müvekkili taşıyanın dava konusu edilen konteyneri, davalı tarafından bu yüklerin tahliye limanı olan Gemlik ... 
Limanı' na 28.08.2017 tarihinde varmasını ve gemiden tahliye edilmesini müteakip davalıya verilen 0-3 günlük serbest ve ücretsiz sürenin sonu olan 31.08.2017 tarihi ile bu konteynerin boş olarak müvekkiline iade edildiği 16.11.2017 tarihleri arasında 78 gün davalı tarafından fuzuli işgal edildiğini, bu tarihler arasında davalı aleyhine demuraj ücreti işletildiğini, müvekkilin konteyneri tahliye limanına varmasını müteakip, davalıya verilen 3 günlük serbest ve ücretsiz sürenin sonu ile bu konteynerin boş olarak müvekkile iade edildiği 16.11.2017 tarihleri arasında 78 gün davalı tarafından fuzuli işgal edilmiş olduğundan bahisle yapılan ödemeler düşüldükten sonra bakiye kalan faturaya bağlı 10.579,77 USD tutarında bakiye demuraj alacağının ödenmediğini belirterek davanın kabulüne, davalının haksız olarak ... İcra Müdürlüğü' nün ... Esas sayılı takip dosyasına yapmış olduğu itirazın iptaline, davalının işbu takibe haksız olarak itiraz ettiğinden bahisle davalı aleyhine %20' den aşağı olmamak üzere icra inkar tazminatına hükmedilmesine, asıl alacaklarına 3095 s. Yasanın 4/a maddesi uyarınca faiz işletilmesine, yargılama giderleri ile vekalet ücretinin karşı tarafa yükletilmesine karar verilmesi talep etmiştir. </code> | <code>0</code> | | <code> Davacı vekili dava dilekçesinde özetle; Davacı ... A.Ş.'nin 1986 yılından beri Irak piyasasında iş yapan ve gerek iş ahlakı ve gerekse dürüstlüğüyle tanınan ve dolayısıyla Irak'ta yapılacak yeni bir iş olduğunda, ilk haberdar edilen bir firma olduğunu, 1989 yılında da İrak'a daimi ofisini açtığını, 2001 yılında ilgili bakanlığın davacı şirketten Saf Bakır Şerit talebinde bulunduğunu, davacının da bunu temin etmek için davalı şirketle ilişki kurduğunu, davalı şirketin Irak'ın talep ettiği spesifikasyonda mal üretecek araca sahip bulunmadığını beyan etmesi üzerine, davacı şirketin bu konuda da yardımcı olduğunu ve üretimi gerçekleştirecek makinelerin davalı tarafından teminine hem teknolojik bilgi ve hem de maddi katkıda bulunduğunu, böylelikle ilk olarak 2002 yılında, davalının ürettiği malların davacı şirket tarafından Irak'a pazarlandığını, bu arada Amerika Irak'ı istila edince, ilişkilerin bir süre askıda kaldığını ve nihayet 2006 yılında Irak Sanayi Bakanlığı'nın davacı şirketi yeniden davet ettiğini, aynı mal için bağlantı kurduğunu ve ilişkinin yeniden devam etmeye başladığını, bu suretle, 2001 yılında 195 ton, 2007'de 42 ton 400 kg, 2008'de 160 ton, 2009'da 234 ton 050 kg, 2010'da 40 ton 400 kg, 2011 'de 182 ton 248 kg ihracat gerçekleştirildiğini, 2009 Yılına kadar ihracat partisi bazında sürdürülen Tek Satıcılık anlaşmasının, 2009 yılında sürekli Tek Satıcılık sözleşmesine dönüştürüldüğünü ve bu sözleşmenin de beş yıl süre ile bağıtlandığını, ne var ki, 2012 yılından itibaren davalı davranışlarının garip bir hal almaya başladığını ve kendilerine bildirdikleri ihalelere katılabilmeleri için bazı belgelerin verilip, alıcıya ibrazı gerekmesine rağmen, davalının yazılı ve telefonla vaki ihtarlarının hiç birini cevaplamadığını ve 2012 yılından itibaren davacının çalışmasını baltaladığını, davalıya yaptıkları son ihtara da, davalı şirketin gerçek dışı cevap verdiğini. 
davalının imzalattığı 2009 tarihli Tek Satıcılık Sözleşmesi'nin davacının her türlü rekabetini önleyici ve bu malı başka üreticilerden sağlamasını engelleyici hükümler taşıdığını, davacı şirket açısından adeta bir esaret sözleşmesi niteliği taşıdığını, davalı şirketin, hem davacı şirketin Tek Satıcılık görev ve kazancını engellediğini, hem de bunu giderebilecek başka alternatiflerin kullanılması imkânlarını da sözleşme ile ortadan kaldırdığını, böylelikle davalının, bir taraftan Tek Satıcılık Sözleşmesini ihlal ederken, diğer taraftan da haksız rekabette bulunarak davacının o açıdan da zarara uğramasına sebebiyet verdiğini belirterek, davalının sözleşmeyi ihlal ettiğinin tespiti ile Irak'a 2012-2014 yılları arasında bizzat veya başkaları marifetiyle mal satıp, satmadığının tespitine, bu nedenle uğranılan zararın tespiti ile bu zarara mahsuben şimdilik 10.000,-USD'nin davalıdan tahsiline, taraflar arasındaki münhasır Tek Satıcılık Sözleşmesi'nin 26.02.2014 tarihinde sona ermiş bulunması sebebiyle, 2001 yılından itibaren süregelen bu başarılı ilişki nedeniyle müvekkili şirket adına uygun bir denkleştirme bedeli tespit ve tayinine ve fazlaya ait talepleri mahfuz kalarak, bu kalem için de davalıdan şimdilik 10.000,-USD'nin tahsiline, davalının, sözleşmeyi ihlal fiilinin dışında, ayrıca haksız rekabette bulunduğunun tespiti ile davalının bizzat veya dolaylı olarak gerçekleştirdiği ihracatlar nedeniyle, T.T.K.'nun 55. ve müteakip maddeleri gereğince, ihracat bedellerinin müvekkili şirkete intikal ettirilmesine ve bu kalem için şimdilik 1.000,- USD'nin davalıdan tahsiline, davacı şirket dışında gerçekleştirilen ihracat nedeniyle hak kesbedilen ücretlerin hangi tarihlerde muaccel oldukları gözetilerek, o tarihlerden itibaren bu alacaklara faiz tahakkuk ettirilmesine karar verilmesini talep ve dava etmişlerdir. Davalı vekili cevap dilekçesinde özetle; Davalı şirket ile davacı ... A.Ş. arasında 23.02.2009 tarihli Yetkili Satıcı Sözleşmesinin İmzalandığını, sözleşme gereği Irak Bölgesi sınırları içerisinde 5 yıl süre ile davalı tarafından üretilen malların satıcı ... tarafından satılacağını, davacı tarafından iddia edilen Irak'ta ihalelere girebilmek için gerekli belgelerin davalı şirketten istenilmesine rağmen cevap mahiyetinde dahi geri dönüşlerin olmadığı hususunun gerçeği yansıtmadığını, davalı şirketten istenilen her türlü belgenin yetkililerine istenildiğinde verildiğini, kaldı ki ... A.Ş.' 
nin Irak devleti sınırlarında ülke içindeki iç karışıklıklardan dolayı iş alamamakta olduğunu ve bundan dolayı da davalı şirketten belge ve sair her hangi bir evrak talebinde bulunmadığının da açıkça ortada olduğunu, davacı şirketin zarara uğramasında sözleşmeden dolayı davalı şirketin hiçbir kusurunun bulunmadığını, tam tersine davacı şirket tarafından Yetkili Satıcı Sözleşmesi gereğince üretilecek ürünler hususunda bilgi verilmesi ve talepte bulunulması, ihale alınması gerektiği halde bu yükümlülüklerin yerine getirilmediğini ve bundan dolayı taraflar arasındaki gereken iş birliğinin gerçekleşmediğini, davalı şirketin, davacı şirket ile birlikte geçmişte yaptığı işler dışında Irak ülkesinde başkaca bir iş yapmadığını ve aralarındaki sözleşmeye uygun davrandığını, hatta davalı şirketçe 20.05.2014 tarihinde davacı şirketlerden ...'a yazı yazılarak birlikte çalışmaya devam edebilmek için gereken hassasiyetin gösterildiği, iş alınması durumunda birlikte çalışılacağı, kendilerinden üretim hususunda bir talepte bulunulmadığı için farklı ülkeler ile çalışılmak zorunda kalındığının açık ve net bir şekilde belirtildiğini, buna rağmen davacı şirketçe hiçbir şekilde Irak ülkesi'nden ihale alınmadığını ve üretim yapılmasının davalı şirketten talep edilmediğini, bu şartlarda açılan davanın hiçbir temelinin bulunmadığını, davacının denkleştirme talebinin yersiz olduğunu, bu talebin 2001 yılından beri talep edilmesinin sözleşme ile bağdaşmadığı gibi, taraflar arasındaki sözleşmenin 2009 yılında akdedildiğini, davacı tarafından yapılan satış işlemleri neticesinde iş çevrelerinin genişlemesi ve iş potansiyellerinin artmasının söz konusu olmadığı gibi davalı şirket nezdinde yarar sağlayıcı bir durum da olmadığını beyanla, davanın reddine karar verilmesini talep etmişlerdir. Davalı vekili 27/02/2019 tarihinde cevap dilekçelerini tamamen ıslahla; 23/02/2009 tarihli yetkili satıcı sözleşmesinin davalı şirket ile davacı ... A.Ş arasında imzalandığını, dolayısıyla diğer davacı yönünden husumet itirazı ile davanın usulden reddine karar verilmesini, haksız rekabete ilişkin davalarda zamanaşımının fiilin öğrenildiği tarihten itibaren 1 yıl olduğunu, davacı yanın haksız rekabet tazminatına ilişkin taleplerine karşı zamanaşımı def'inde bulunduğunu, esasa ilişkin olarakda; 23/02/2009 tarihli "Yetkili Satıcı Sözleşmesi"nin taraflar arasında sadece centilmenlik ve iyi niyet göstergesi olarak imzalandığını, davalı şirketin müşterek imza ile temsil edilmesi gerektiği halde sözleşmede sadece bir imzanın bulunmasının da bunun göstergesi olduğunu, dolayısıyla sözleşmenin hukuken geçerliliği olan sarih bir sözleşme olmadığını, kaldıki akdedilen sözleşmenin tek satıcılık sözleşmesi olmadığını, tek satıcılık sözleşmesi ile yetkili satıcılık sözleşmesi arasında hukuki mahiyeti ve sonuçları itibariyle farklılıkların bulunduğunu, sözleşmenin sorumluluklar başlıklı 4.maddesinin b bendinde "aynı şekilde üretici Irak pazarına başka bir aracı ile girerse, satışı gerçekleşen mal bedelinin % 5'i tutarındaki kısmını cezai müeyide olarak def'aten ve nakden temsilci ... A.Ş.ye ödemek zorundadır." 
denilmek suretiyle sözleşmenin tek satıcılık sözleşmesi niteliğinde olmadığının vurgulandığını, ayrıca davalı vekiledeni şirketin kendisi tarafından Irak ülkesine mal satışının mümkün olduğunu, davacı şirket dışında başka bir aracı şirket kullanılmaması gerektiğinin tarafların serbest iradeleri ve sözleşme serbestliği ilkesine göre hüküm altına alındığını, dolayısıyla davalı şirketin anılan sözleşmeden doğan hukuki sorumluluğunu ihlal etmediğini, Irak pazarına başka bir aracı ile değil doğrudan ihale yoluyla bizzat mal satışı yaptığını, Sözleşmenin başka bir aracı ile bir kısmının ihlal edilmesi sebebiyle cezai şartın doğmayacağını, taraflar arasında tek satıcılık sözleşmesi bulunmadığını, sözleşmeye acentelik hükümlerinin de uygulanmayacağını, kaldı ki davalı şirket ile davacı ... arasında 2010 yılında Irak'a mal satımına ilişkin ihracat kayıtlı mal kaydı yapıldığını, davacıların farklı tüzel kişilikler olmasına rağmen, gerçek kişi ortakları yönünden aralarında organik bağ bulunduğunu ve davacı ...'ın davalı ve ... arasındaki sözleşmeyi bildiğini, hiçbir şekilde kabul anlamına gelmemekle birlikte taraflar arasındaki yetkili satıcılık sözleşmesinin tek satıcılık sözleşmesi olduğu farz edildiğinde dahi, sözleşmeyi ihlal edenin bizzat davacı ... A.Ş. olduğunu, zira diğer davacı ile aralarındaki fiili durumu bilmesine rağmen bunu kabul ettiğini, Davacı yanın TTK anlamında acenta olarak görülemeyeceğini, taraflar arasında fiili bir mal satışı olduğunu, o halde TTK'nun denkleştirme istemine ilişkin 122.maddesinin somut olayda uygulanamaycağını, davalı vekiledeni tarafından sözleşme ilişkisinin sona ermesinden sonra davacı şirketin müşterilerinden önemli menfaatler elde etmediğini veya davacının kazandırdığı müşteriler ile iş yapılmadığını, aksi kabul halinde denkleştirmenin ödenip ödenmemesi veya ne oranda ödeneceği hususunda hakkaniyet indirimi yapılması gerektiğini, Yine davacı yanın haksız rekabet tazminatı yönünden hem yoksun kaldığı karın tazminini, hemde davalının elde etmesi mümkün görülen kazancının talep edilemeyeceğini, davacının bunlardan birini seçmek zorunda olduğunu, keza haksız rekabet sebebi ile tazminat talebinin koşulları olan dürüst davranma kuralına aykırılık ve kusurun, somut uyuşmazlıkta mevcut olmadığını beyanla, haksız ve mesnetsiz davanın öncelikle husumet yokluğu ve zamanaşımı yönünden usulden, aksi halde davanın esastan reddine karar verilmesini talep etmişlerdir. </code> | <code>Haksız rekabete ilişkin<br>bu Kısım hükümlerinin amacı, bütün katılanların menfaatine, dürüst ve bozulmamış<br>rekabetin sağlanmasıdır.Rakipler arasında veya tedarik edenlerle müşteriler<br>arasındaki ilişkileri etkileyen aldatıcı veya dürüstlük kuralına diğer şekillerdeki<br>aykırı davranışlar ile ticari uygulamalar haksız ve hukuka aykırıdır.</code> | <code>2</code> | | <code> Davacı vekili dava dilekçesinde özetle; Müvekkili şirketin perakende sektöründe ağırlıklı olarak elektronik cihazların satışı işiyle iştigal ettiğini ve tüketiciler tarafından çeşitli şikayetlerle kendisine teslim edilen ürünleri, teknik servis olarak faaliyet gösteren belirli şirketlere onarım için yönlendirdiğini, bu lojistik faaliyetlerin zaman zaman, kargo şirketi olarak faaliyet gösteren davalı taraf ile gerçekleştirildiğini, ... 
A.Ş.'nin, müvekkili şirketin ticari ilişkileri kapsamında belirli ürünlerini teslim ettiği bir yetkili teknik servis olarak faaliyet gösterdiğini ve belirli cihazları onarım için teslim aldıktan sonra yine müvekkili şirkete teslim ettiğini, bu operasyonların dış lojistik tarafının da ...'nin anlaşmalı olduğu kargo şirketi olan davalı taraf ile gerçekleştirildiğini, bu ticari ilişki sebebi ile yedi adet cep telefonun da onarım için ...’ne gönderildiğini ve ...’nde işleme tabi tutulan 7 adet telefonların gönderici sıfatı ile ... tarafından müvekkili şirkete teslim edilmek üzere kargoya verildiğini, 19/02/2017 tarihinde diğer ürünlerin teslim edildiğini, ancak yedi adet cep telefonunun teslim edilmediğini, teslim edilmediğinin farkına varılmasının ardından müvekkili şirketin yetkililerinin gecikmeksizin davalı yetkililerine bilgi verdiğini ve sorunun çözülmesini talep ettiklerini ve yine ... yetkilileri ile de koordinasyon halinde olunduğunu, ...’nden alınan bilgi uyarınca da, "içerisinde 7 (yedi) adet cep telefonunun yer aldığı kolinin, müvekkili şirkete teslim edilmek üzerine kargoya verildiğini, ancak ilgili kolinin, müvekkili şirketin İzmit ... Mağazası yetkililerine 19/02/2017 tarihinde ve sonrasında teslim edilmediğini" tespit ettiklerini, İzmit ... Mağazası’nın kamera kayıtları incelendiği takdirde kolilenmiş bir kargonun müvekkili şirkete hiçbir zaman teslim edilmediğinin anlaşılacağını, bunun üzerine davalıdan ilgili ürünlerin tazminine ilişkin işlemlerin başlatılmasını talep ettiklerini, ancak davalı şirketin, kendi yetkililerine izletilen kamera görüntüleri sonucunda söz konusu ürünlerin "..teslim edilmediğini şifahen ikrar etmesine rağmen" sorumluluğunu bir türlü yerine getirmediğini, kendilerine gönderilen e-maillere ise şirket yetkililerinin cevabının "..kolinin akıbetinin bilinemediği" olduğunu, davalı şirketin yükün başına ne geldiğini açıklayamıyor olmasının, kendilerinin kasta eşdeğer kusurları bulunduğunu ve zararlarının tamamını karşılamaları gerektiğini gösterdiğini belirterek, 9.248,00 TL tutarındaki zararın olayın meydana geldiği tarihten itibaren işleyecek ticari temerrüt faizi ile birlikte davalıdan tazminine karar verilmesini talep ve dava etmiştir. Davalı vekili cevap dilekçesinde özetle; Dava dilekçesinde davaya konu taşımaya ilişkin herhangi bir taşıma fatura bilgisi verilmediğini, taraf isimlerine bağlı olarak müvekkili şirket kayıtlarında yapılan araştırma neticesinde herhangi bir taşıma kaydına rastlanılmadığını, dolayısıyla davacının hangi taşımaya konu kargo ile ilgili dava açtığının net ve belirgin olmadığını, yine taşımaya konu kargonun içeriğini ispata yönelik herhangi bir fatura ve irsaliye dahi bulunmaksızın tazmin talebinde bulunulduğunu, taşıma işinin müvekkili tarafından yapıldığının kabulü anlamına gelmemekle birlikte, taşımaya konu edildiği iddia edilen kargonun davacı tarafından da açıkça belirtildiği üzere tamire gönderilen, ikinci el (kullanılmış ve arızalı) bir ürün olduğunu, tamire gönderilen ikinci el bir ürünün tamir kabul etmeyecek durumda bir hurda olmasının muhtemel olduğunu, ancak taşımaya ilişkin bir bilgi taraflarına sunulmadığından bu hususta araştırma yapmanın da mümkün olmadığını, her şeyden önce, TTK 886 uyarınca tam tazminata hükmedilebilmesi için zararın meydana gelmesinde taşıyıcının kast ve pervasız davranış kusuru varlığının da ispat edilmesinin gerektiğini belirterek, davanın reddine karar verilemisini talep etmiştir. 
</code> | <code>Zarara, kasten veya<br>pervasızca bir davranışla ve böyle bir zararın meydana gelmesi ihtimalinin bilinciyle<br>işlenmiş bir fiilinin veya ihmalinin sebebiyet verdiği ispat edilen taşıyıcı veya<br>879 uncu maddede belirtilen kişiler, bu Kısımda öngörülen sorumluluktan kurtulma<br>hâllerinden ve sorumluluk sınırlamalarından yararlanamaz.</code> | <code>2</code> | * Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss) ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `fp16`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 32 - `per_device_eval_batch_size`: 32 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - 
`full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | loss | sts-dev_spearman_cosine | |:------:|:----:|:-------------:|:------:|:-----------------------:| | 0 | 0 | - | - | 0.2345 | | 0.0792 | 500 | 0.1311 | 0.0141 | 0.2036 | | 0.1584 | 1000 | 0.0203 | 0.0158 | 0.1997 | | 0.2376 | 1500 | 0.0174 | 0.0174 | 0.1653 | | 0.3168 | 2000 | 0.0108 | 0.0136 | 0.1457 | | 0.3960 | 2500 | 0.0121 | 0.0156 | 0.2099 | | 0.4752 | 3000 | 0.0122 | 0.0140 | 0.1723 | | 0.5544 | 3500 | 0.0125 | 0.0118 | 0.2248 | | 0.6336 | 4000 | 0.0079 | 0.0115 | 0.2337 | | 0.7128 | 4500 | 0.0093 | 0.0104 | 0.2331 | | 0.7920 | 5000 | 0.0071 | 0.0107 | 0.2424 | | 0.8712 | 5500 | 0.0041 | 0.0100 | 0.2463 | | 0.9504 | 6000 | 0.0069 | 0.0098 | 0.2431 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.0.1 - Transformers: 4.42.4 - PyTorch: 2.3.1+cu121 - Accelerate: 0.32.1 - Datasets: 2.20.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers and SoftmaxLoss ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
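The card above stops at the citation without an inference example. A minimal sketch of querying a Sentence Transformers bi-encoder trained this way could look like the following; the repository path is a placeholder (the model id is not visible in this excerpt) and the two sentences are illustrative stand-ins for the Turkish court texts used as training pairs:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder repo id: substitute the actual model id described by this card.
model = SentenceTransformer("author/legal-text-pair-model")

# Encode a pair of passages like the training pairs shown above and compare them.
sentences = [
    "Davacı vekili dava dilekçesinde özetle; ...",
    "Davalı vekili cevap dilekçesinde özetle; ...",
]
embeddings = model.encode(sentences)
print(util.cos_sim(embeddings[0], embeddings[1]))
```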
YOLO-a1/results
YOLO-a1
2024-10-27T13:30:56Z
110
0
transformers
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-27T13:06:56Z
--- library_name: transformers license: apache-2.0 base_model: facebook/bart-large tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.8962 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | 6.8545 | | No log | 2.0 | 4 | 6.1114 | | No log | 3.0 | 6 | 5.8962 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
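The results card lists only loss values; as a hedged illustration, the checkpoint named in this row could be loaded through the standard transformers pipeline (the repo id is taken from the row's modelId field, and the input prompt is invented for the example):

```python
from transformers import pipeline

# Repo id taken from the modelId field of this row; the prompt is illustrative only.
generator = pipeline("text2text-generation", model="YOLO-a1/results")
print(generator("Summarize: the model was fine-tuned for three epochs on a small dataset.")[0]["generated_text"])
```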
Hongsi37/roberta-base-klue-ynat-classification
Hongsi37
2024-10-27T13:25:01Z
105
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-21T12:52:04Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
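The card's "How to Get Started with the Model" section is left as [More Information Needed]. A minimal sketch, assuming the checkpoint is a standard RoBERTa sequence classifier fine-tuned for the KLUE-YNAT topic task (the Korean headline is an invented example), might be:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hongsi37/roberta-base-klue-ynat-classification"  # from the modelId field above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Korean headline; KLUE-YNAT is a news-topic classification task.
inputs = tokenizer("삼성전자, 새 반도체 공장 건설 계획 발표", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(-1).item()
print(model.config.id2label.get(pred, pred))  # label names may be generic if not set in the config
```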
James2313123/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B_3bpw-h8-EXL2
James2313123
2024-10-27T13:24:19Z
5
0
null
[ "safetensors", "llama", "exl2", "3bpw", "en", "license:apache-2.0", "3-bit", "region:us" ]
null
2024-10-27T13:00:00Z
--- license: apache-2.0 language: - en base_model: DavidAU/DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B quantized_by: James2313123 tags: - exl2 - 3bpw --- ### Model Description 3bpw-h8-exl2 quant of DavidAU's DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B Link to original model and creator: https://huggingface.co/DavidAU/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B
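The card gives no download or loading instructions. One hedged option is to fetch the EXL2 weights with huggingface_hub and point an ExLlamaV2-compatible backend (for example text-generation-webui or TabbyAPI) at the resulting directory; the backend's own loading API is not reproduced here:

```python
from huggingface_hub import snapshot_download

# Fetch the 3bpw EXL2 shards locally; an ExLlamaV2-compatible backend then loads this directory.
local_dir = snapshot_download(
    repo_id="James2313123/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B_3bpw-h8-EXL2"
)
print(local_dir)
```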
luluw/whisper-medium
luluw
2024-10-27T13:22:05Z
14
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-25T04:43:59Z
--- library_name: transformers language: - en license: apache-2.0 base_model: openai/whisper-medium tags: - generated_from_trainer metrics: - wer model-index: - name: Whisper Tiny results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Personal - Mimic Recording dataset. It achieves the following results on the evaluation set: - Loss: 0.1404 - Wer: 0.0645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 75 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 0.3839 | 0.9932 | 73 | 0.1968 | 0.0975 | | 0.0763 | 2.0 | 147 | 0.1418 | 0.0879 | | 0.017 | 2.9932 | 220 | 0.1410 | 0.1200 | | 0.0058 | 4.0 | 294 | 0.1404 | 0.0645 | | 0.0014 | 4.9660 | 365 | 0.1396 | 0.0647 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
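As a hedged usage sketch (not part of the original card), the fine-tuned checkpoint can be queried through the transformers speech-recognition pipeline; the audio file name is a placeholder:

```python
from transformers import pipeline

# Repo id from the modelId field above; "sample.wav" is a placeholder audio file.
asr = pipeline("automatic-speech-recognition", model="luluw/whisper-medium")
print(asr("sample.wav")["text"])
```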
KingNish/Reasoning-Llama-3b-v0.2
KingNish
2024-10-27T13:15:22Z
180
3
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:KingNish/Reasoning-Llama-3b-v0.1", "base_model:finetune:KingNish/Reasoning-Llama-3b-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-22T15:06:04Z
--- base_model: KingNish/Reasoning-Llama-3b-v0.1 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** KingNish - **License:** apache-2.0 - **Finetuned from model :** KingNish/Reasoning-Llama-3b-v0.1 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
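The card does not include an inference snippet. A minimal sketch with plain transformers, assuming the tokenizer ships a chat template (the model is tagged conversational) and that a GPU is available for device_map="auto", could be:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "KingNish/Reasoning-Llama-3b-v0.2"  # from the modelId field above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative question; the chat template bundled with the tokenizer drives the reasoning format.
messages = [{"role": "user", "content": "How many prime numbers are there below 20?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```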
mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF
mradermacher
2024-10-27T13:12:07Z
16
0
transformers
[ "transformers", "gguf", "en", "dataset:dyyyyyyyy/ScaleQuest-Math", "base_model:dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen", "base_model:quantized:dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T12:45:57Z
--- base_model: dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen datasets: - dyyyyyyyy/ScaleQuest-Math language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/dyyyyyyyy/ScaleQuest-Qwen2-Math-7B-QGen <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-Qwen2-Math-7B-QGen.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
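The card explains which quant to pick but not how to run one locally. A hedged sketch using huggingface_hub plus llama-cpp-python (one common GGUF runtime, not something this repository prescribes) with the i1-Q4_K_M file from the table above:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# File name copied from the i1-Q4_K_M row of the quant table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/ScaleQuest-Qwen2-Math-7B-QGen-i1-GGUF",
    filename="ScaleQuest-Qwen2-Math-7B-QGen.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write one short math word problem about fractions.", max_tokens=128)
print(out["choices"][0]["text"])
```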
EmanuelOrler/setfit-spanish-event-perspective
EmanuelOrler
2024-10-27T13:03:21Z
7
0
setfit
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "region:us" ]
text-classification
2024-10-27T13:02:50Z
--- library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: Lewis Hamilton pide perdón tras ser acusado de sexista por burlarse de su sobrino - text: 'Nuevas revelaciones del FIFA Gate: una cuenta ultra secreta y el temor reverencial a Julio Grondona' - text: Hallaron una inmensa `huella digital` en el espacio - text: Qué hacía Gastón Pauls viendo a la Selección con Lionel Messi y Sergio Agüero - text: 'Bitcoin: la volatilidad de las últimas semanas abre el debate sobre el futuro de la moneda' inference: true --- # SetFit This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit <!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) --> - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 2 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Evento | <ul><li>'El dólar vuelve a subir a la espera de una decisión clave del Banco Central'</li><li>'Viernes caluroso y sin lluvias'</li><li>'ARA San Juan | El dolor de los familiares tras la retirada de EEUU: `Nos están dejando sin recursos para buscar`'</li></ul> | | Perspectiva | <ul><li>'El futuro de la educación tras la pandemia: ¿hacia un modelo híbrido permanente?'</li><li>'¿Cómo impacta la automatización en los trabajos de baja calificación?'</li><li>'Feminicidios: Falta una construcción social y cultural contra la violencia'</li></ul> | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("EmanuelOrler/setfit-spanish-event-perspective") # Run inference preds = model("Hallaron una inmensa `huella digital` en el espacio") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 5 | 12.9231 | 24 | | Label | Training Sample Count | |:------------|:----------------------| | Evento | 22 | | Perspectiva | 17 | ### Training Hyperparameters - batch_size: (12, 12) - num_epochs: (4, 16) - max_steps: -1 - sampling_strategy: undersampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - evaluation_strategy: steps - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0159 | 1 | 0.0885 | - | | 0.1587 | 10 | 0.3927 | 0.2944 | | 0.3175 | 20 | 0.3039 | 0.2387 | | 0.4762 | 30 | 0.2466 | 0.1807 | | 0.6349 | 40 | 0.2049 | 0.1686 | | 0.7937 | 50 | 0.1803 | 0.1786 | | 0.9524 | 60 | 0.1319 | 0.2002 | | 1.1111 | 70 | 0.045 | 0.3103 | | 1.2698 | 80 | 0.0099 | 0.3200 | | 1.4286 | 90 | 0.0036 | 0.3845 | | 1.5873 | 100 | 0.0021 | 0.4078 | | 1.7460 | 110 | 0.0011 | 0.4184 | | 1.9048 | 120 | 0.0011 | 0.4186 | | 2.0635 | 130 | 0.0009 | 0.4282 | | 2.2222 | 140 | 0.0008 | 0.4242 | | 2.3810 | 150 | 0.0008 | 0.4269 | | 2.5397 | 160 | 0.0007 | 0.4303 | | 2.6984 | 170 | 0.0006 | 0.4301 | | 2.8571 | 180 | 0.0006 | 0.4321 | | 3.0159 | 190 | 0.0006 | 0.4311 | | 3.1746 | 200 | 0.0005 | 0.4291 | | 3.3333 | 210 | 0.0006 | 0.4322 | | 3.4921 | 220 | 0.0005 | 0.4315 | | 3.6508 | 230 | 0.0005 | 0.4308 | | 3.8095 | 240 | 0.0005 | 0.4307 | | 3.9683 | 250 | 0.0004 | 0.4312 | ### Framework Versions - Python: 3.10.14 - SetFit: 1.1.0 - Sentence Transformers: 3.2.1 - Transformers: 4.44.0 - PyTorch: 2.4.0 - Datasets: 2.21.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact 
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
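The training hyperparameters listed above correspond to `setfit.TrainingArguments`. A minimal sketch of how a comparable run could be set up is shown below; the base sentence-transformer checkpoint, dataset files, and column names are placeholders, not details taken from this repository:

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder dataset with "text" and "label" columns; replace with the actual
# Evento/Perspectiva training data used for this checkpoint.
dataset = load_dataset("csv", data_files={"train": "train.csv", "eval": "eval.csv"})

# Placeholder base model; the actual sentence-transformer backbone of this
# checkpoint is not stated in this section.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(12, 12),
    num_epochs=(4, 16),
    sampling_strategy="undersampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    evaluation_strategy="steps",
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["eval"],
)
trainer.train()
```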
leekh7624/model3
leekh7624
2024-10-27T12:59:49Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:leekh7624/model2", "base_model:finetune:leekh7624/model2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T12:55:36Z
--- base_model: leekh7624/model2 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** leekh7624 - **License:** apache-2.0 - **Finetuned from model :** leekh7624/model2 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mlconvexai/jais-13b-chat_bitsandbytes_8bit
mlconvexai
2024-10-27T12:45:29Z
11
0
null
[ "pytorch", "jais", "Transformers", "Arabic", "English", "LLM", "Decoder", "causal-", "bitsandbytes", "text-generation", "custom_code", "en", "ar", "base_model:inceptionai/jais-13b-chat", "base_model:quantized:inceptionai/jais-13b-chat", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
2024-08-08T18:33:32Z
--- language: - en - ar tags: - Transformers - Arabic - English - LLM - Decoder - causal- - bitsandbytes base_model: core42/jais-13b-chat pipeline_tag: text-generation license: apache-2.0 --- # Jais-13b-chat Bitsandbytes 8 bit quantization This model card shows how to use the Jais-13b-chat Bitsandbytes 8 bit quantization model. ## Jais-13b-chat Jais-13b-chat is a large language model (LLM) fine-tuned for both Arabic and English. It is based on the GPT-3 architecture and uses SwiGLU non-linearity and ALiBi position embeddings for improved context handling and precision. It was trained on a massive dataset of Arabic and English text, and further fine-tuned on 4 million Arabic and 6 million English prompt-response pairs, including safety-oriented instructions. This allows Jais-13b-chat to engage in multi-turn conversations on various topics, with a particular focus on the Arab world. ## Bitsandbytes 8 bit quantization Below is a sample code to use the model. Users must enable ```trust_remote_code=True ``` when loading the model. ```console import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_path = "mlconvexai/jais-13b-chat_bitsandbytes_8bit" prompt_eng = "### Instruction: Your name is Jais, and you are named after Jebel Jais, the highest mountain in UAE. You are built by Inception and MBZUAI. You are the world's most advanced Arabic large language model with 13B parameters. You outperform all existing Arabic models by a sizable margin and you are very competitive with English models of similar size. You can answer in Arabic and English only. You are a helpful, respectful and honest assistant. When answering, abide by the following guidelines meticulously: Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, explicit, offensive, toxic, dangerous, or illegal content. Do not give medical, legal, financial, or professional advice. Never assist in or promote illegal activities. Always encourage legal and responsible actions. Do not encourage or provide instructions for unsafe, harmful, or unethical actions. Do not create or share misinformation or fake news. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Prioritize the well-being and the moral integrity of users. Avoid using toxic, derogatory, or offensive language. Maintain a respectful tone. Do not generate, promote, or engage in discussions about adult content. Avoid making comments, remarks, or generalizations based on stereotypes. Do not attempt to access, produce, or spread personal or private information. Always respect user confidentiality. Stay positive and do not say bad things about anything. Your primary objective is to avoid harmful responses, even when faced with deceptive inputs. Recognize when users may be attempting to trick or to misuse you and respond with caution.\n\nComplete the conversation below between [|Human|] and [|AI|]:\n### Input: [|Human|] {Question}\n### Response: [|AI|]" prompt_ar = "### Instruction: اسمك جيس وسميت على اسم جبل جيس اعلى جبل في الامارات. تم بنائك بواسطة Inception و MBZUAI. أنت نموذج اللغة العربية الأكثر تقدمًا في العالم مع بارامترات 13B. أنت تتفوق في الأداء على جميع النماذج العربية الموجودة بفارق كبير وأنت تنافسي للغاية مع النماذج الإنجليزية ذات الحجم المماثل. 
يمكنك الإجابة باللغتين العربية والإنجليزية فقط. أنت مساعد مفيد ومحترم وصادق. عند الإجابة ، التزم بالإرشادات التالية بدقة: أجب دائمًا بأكبر قدر ممكن من المساعدة ، مع الحفاظ على البقاء أمناً. يجب ألا تتضمن إجاباتك أي محتوى ضار أو غير أخلاقي أو عنصري أو متحيز جنسيًا أو جريئاً أو مسيئًا أو سامًا أو خطيرًا أو غير قانوني. لا تقدم نصائح طبية أو قانونية أو مالية أو مهنية. لا تساعد أبدًا في أنشطة غير قانونية أو تروج لها. دائما تشجيع الإجراءات القانونية والمسؤولة. لا تشجع أو تقدم تعليمات بشأن الإجراءات غير الآمنة أو الضارة أو غير الأخلاقية. لا تنشئ أو تشارك معلومات مضللة أو أخبار كاذبة. يرجى التأكد من أن ردودك غير متحيزة اجتماعيًا وإيجابية بطبيعتها. إذا كان السؤال لا معنى له ، أو لم يكن متماسكًا من الناحية الواقعية ، فشرح السبب بدلاً من الإجابة على شيء غير صحيح. إذا كنت لا تعرف إجابة السؤال ، فالرجاء عدم مشاركة معلومات خاطئة. إعطاء الأولوية للرفاهية والنزاهة الأخلاقية للمستخدمين. تجنب استخدام لغة سامة أو مهينة أو مسيئة. حافظ على نبرة محترمة. لا تنشئ أو تروج أو تشارك في مناقشات حول محتوى للبالغين. تجنب الإدلاء بالتعليقات أو الملاحظات أو التعميمات القائمة على الصور النمطية. لا تحاول الوصول إلى معلومات شخصية أو خاصة أو إنتاجها أو نشرها. احترم دائما سرية المستخدم. كن إيجابيا ولا تقل أشياء سيئة عن أي شيء. هدفك الأساسي هو تجنب الاجابات المؤذية ، حتى عند مواجهة مدخلات خادعة. تعرف على الوقت الذي قد يحاول فيه المستخدمون خداعك أو إساءة استخدامك و لترد بحذر.\n\nأكمل المحادثة أدناه بين [|Human|] و [|AI|]:\n### Input: [|Human|] {Question}\n### Response: [|AI|]" device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True) def get_response(text, tokenizer=tokenizer, model=model): inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True) input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) input_len = inputs["input_ids"].shape[-1] generate_ids = model.generate( input_ids, attention_mask=attention_mask, top_p=0.9, temperature=0.3, max_length=2048-input_len, min_length=input_len + 4, repetition_penalty=1.2, do_sample=True, ) response = tokenizer.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True )[0] response = response.split("### Response: [|AI|]") return {"response": response} ques= "ما هي عاصمة الامارات؟" text = prompt_ar.format_map({'Question':ques}) print(get_response(text)) ques = "What is the capital of UAE?" text = prompt_eng.format_map({'Question':ques}) print(get_response(text)) ```
openpecha/TTS_26102024
openpecha
2024-10-27T12:26:59Z
103
0
transformers
[ "transformers", "safetensors", "speecht5", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
2024-10-27T12:19:53Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF
mradermacher
2024-10-27T12:21:44Z
292
0
transformers
[ "transformers", "gguf", "en", "dataset:dyyyyyyyy/ScaleQuest-Math", "base_model:dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen", "base_model:quantized:dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T11:27:42Z
--- base_model: dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen datasets: - dyyyyyyyy/ScaleQuest-Math language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q3_K_S.gguf) | Q3_K_S | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q3_K_L.gguf) | Q3_K_L | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.IQ4_XS.gguf) | IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q5_K_S.gguf) | Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q6_K.gguf) | Q6_K | 5.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.f16.gguf) | f16 | 13.9 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Thtsuca/bert-base-japanese-v3-wrime-sentiment
Thtsuca
2024-10-27T12:15:28Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T12:15:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hadiaskari98/Vulnerability_NER_prod
hadiaskari98
2024-10-27T12:15:10Z
107
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "en", "base_model:google-bert/bert-large-cased", "base_model:finetune:google-bert/bert-large-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-10-27T10:59:27Z
--- license: mit language: - en base_model: - google-bert/bert-large-cased pipeline_tag: token-classification library_name: transformers --- **How to use** ```python from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("hadiaskari98/Vulnerability_NER_prod") model = AutoModelForTokenClassification.from_pretrained("hadiaskari98/Vulnerability_NER_prod") nlp = pipeline("ner", model=model, tokenizer=tokenizer) example = "This is an example of a SQL Injection attack" ner_results = nlp(example) print(ner_results) ```
antomtnez/MyNewModel
antomtnez
2024-10-27T12:04:48Z
12
0
null
[ "pytorch", "vit", "vision", "image-classification", "base_model:omarques/autotrain-dogs-and-cats-1527055142", "base_model:finetune:omarques/autotrain-dogs-and-cats-1527055142", "license:apache-2.0", "region:us" ]
image-classification
2024-10-16T17:57:17Z
--- license: apache-2.0 base_model: - omarques/autotrain-dogs-and-cats-1527055142 tags: - vision - image-classification pipeline_tag: image-classification ---
mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF
mradermacher
2024-10-27T12:00:07Z
7
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:SvalTek/L3.2ColdBrew-ChattyRP", "base_model:quantized:SvalTek/L3.2ColdBrew-ChattyRP", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T11:46:13Z
--- base_model: SvalTek/L3.2ColdBrew-ChattyRP language: - en library_name: transformers quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/SvalTek/L3.2ColdBrew-ChattyRP <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ1_S.gguf) | i1-IQ1_S | 1.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ2_M.gguf) | i1-IQ2_M | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ3_S.gguf) | i1-IQ3_S | 1.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ3_M.gguf) | i1-IQ3_M | 1.7 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 2.0 | fast on arm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 2.0 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 2.0 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q4_0.gguf) | i1-Q4_0 | 2.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/L3.2ColdBrew-ChattyRP-i1-GGUF/resolve/main/L3.2ColdBrew-ChattyRP.i1-Q6_K.gguf) | i1-Q6_K | 2.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
yunguks/walk1009-4bit
yunguks
2024-10-27T11:49:39Z
10
1
transformers
[ "transformers", "safetensors", "exaone", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-27T10:26:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hyacinthum/Piidgeon-ai4privacy
hyacinthum
2024-10-27T11:47:52Z
14
1
null
[ "safetensors", "deberta-v2", "NeuralWave", "Hackathon", "en", "de", "fr", "it", "es", "nl", "dataset:ai4privacy/pii-masking-400k", "base_model:iiiorg/piiranha-v1-detect-personal-information", "base_model:finetune:iiiorg/piiranha-v1-detect-personal-information", "license:cc-by-nc-4.0", "region:us" ]
null
2024-10-26T17:59:48Z
--- license: cc-by-nc-4.0 datasets: - ai4privacy/pii-masking-400k language: - en - de - fr - it - es - nl base_model: - iiiorg/piiranha-v1-detect-personal-information tags: - NeuralWave - Hackathon --- ## Overview This model serves to enhance the precision and accuracy of personal information detection by utilizing a reduced label set compared to its base model. Through this refinement, it aims to provide superior labeling precision for identifying personal information across multiple languages. --- ## Features - **Improved Precision**: By reducing the label set size from the base model, the model enhances the precision of the labeling procedure, ensuring more reliable identification of sensitive information. - **Model Versions**: - **Maximum Accuracy Focus**: This version aims to achieve the highest possible accuracy in the detection process, making it suitable for applications where minimizing errors is crucial. - **Maximum Precision Focus**: This variant is designed to maximize the precision of the detection, ideal for scenarios where false positives are particularly undesirable. --- ## Installation To run this model, you will need to install the dependencies: ```bash pip install torch transformers safetensors ``` --- ## Usage Load and run the model using PyTorch and transformers: ```python import json from transformers import AutoModelForTokenClassification, AutoConfig, BertTokenizerFast from safetensors.torch import load_file # Load the config config = AutoConfig.from_pretrained("folder_to_model") # Initialize the model with the config model = AutoModelForTokenClassification.from_config(config) # Load the safetensors weights state_dict = load_file("folder_to_tensors") # Load the state dict into the model model.load_state_dict(state_dict) # Load the tokenizer tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-multilingual-cased") # Load the label mapper if needed (LabelMapper is assumed to be the helper class defined alongside this repo's training code) with open("pii_model/label_mapper.json", 'r') as f: label_mapper_data = json.load(f) label_mapper = LabelMapper() label_mapper.label_to_id = label_mapper_data['label_to_id'] label_mapper.id_to_label = {int(k): v for k, v in label_mapper_data['id_to_label'].items()} label_mapper.num_labels = label_mapper_data['num_labels'] # Process outputs for analysis... ``` --- ## Evaluation - **Accuracy Model**: Focused on minimizing errors; evaluated to achieve the highest accuracy metrics. - **Precision Model**: Designed to minimize false positives, optimizing for precision-driven applications. --- ## Disclaimer The publisher of this repository is not affiliated with Ai4Privacy or Ai Suisse SA. ## Honorary Mention This repo was created during the Hackathon organized by [NeuralWave](https://neuralwave.ch/#/)
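A minimal inference sketch that continues from the loading code above; the example sentence is illustrative, and `label_mapper` is the mapping loaded from `label_mapper.json`:

```python
import torch

# Assumes `model`, `tokenizer`, and `label_mapper` from the loading code above.
text = "John Doe lives at 123 Main Street and his email is john.doe@example.com."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its predicted PII label.
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred_id in zip(tokens, pred_ids):
    print(token, label_mapper.id_to_label[pred_id])
```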
mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF
mradermacher
2024-10-27T11:46:07Z
8
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:thanhkt/Qwen2.5-3B-MultiChat-Vi", "base_model:quantized:thanhkt/Qwen2.5-3B-MultiChat-Vi", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T11:39:17Z
--- base_model: thanhkt/Qwen2.5-3B-MultiChat-Vi language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/thanhkt/Qwen2.5-3B-MultiChat-Vi <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q2_K.gguf) | Q2_K | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q3_K_S.gguf) | Q3_K_S | 1.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q3_K_L.gguf) | Q3_K_L | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.IQ4_XS.gguf) | IQ4_XS | 1.9 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q5_K_S.gguf) | Q5_K_S | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q5_K_M.gguf) | Q5_K_M | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q6_K.gguf) | Q6_K | 2.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-3B-MultiChat-Vi-GGUF/resolve/main/Qwen2.5-3B-MultiChat-Vi.f16.gguf) | f16 | 6.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
foreverpiano/mochi
foreverpiano
2024-10-27T11:38:47Z
7
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "diffusers:MochiPipeline", "region:us" ]
null
2024-10-27T11:24:42Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DeZoomer/Zendaya-FluxLora
DeZoomer
2024-10-27T11:35:03Z
14
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T11:33:21Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/091639_-1_0_image_4_share_00001.webp - text: '-' output: url: images/090422_-1_0_image_4_share_00001.webp - text: '-' output: url: images/091638_-1_0_image_4_share_00001.webp - text: '-' output: url: images/091639_-1_0_image_4_share_00002.webp - text: '-' output: url: images/091639_-1_0_image_4_share_00003.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Zendaya | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/Zendaya-FluxLora/tree/main) them in the Files & versions tab.
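For non-ComfyUI workflows, a minimal diffusers sketch is shown below. The pipeline call, step count, and file layout are assumptions based on the standard FLUX.1-dev + LoRA setup, not instructions from the author:

```python
import torch
from diffusers import FluxPipeline

# Load the base model this LoRA was trained against.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA from this repository (pass weight_name=... if auto-detection
# of the .safetensors file fails).
pipe.load_lora_weights("DeZoomer/Zendaya-FluxLora")

image = pipe(
    "Portrait photo of a woman in a garden.",
    width=768,
    height=1024,
    guidance_scale=3.5,                      # FluxGuidance 3-4, as recommended above
    num_inference_steps=28,
    joint_attention_kwargs={"scale": 1.0},   # LoRA strength 0.8-1.2
).images[0]
image.save("output.png")
```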
2point5p/krx-qwen2-7b-it-v1
2point5p
2024-10-27T11:31:55Z
7
0
null
[ "safetensors", "qwen2", "text-generation-inference", "unsloth", "trl", "krx", "en", "license:apache-2.0", "region:us" ]
null
2024-10-26T20:44:23Z
--- base_model: unsloth/qwen2-7b-instruct-bnb-4bit tags: - text-generation-inference - unsloth - qwen2 - trl - krx license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** 2point5p - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2-7b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DeZoomer/TaylorSwift-FluxLora
DeZoomer
2024-10-27T11:31:36Z
1,538
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T11:29:55Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/164612_-1_0_image_4_share_00002.webp - text: '-' output: url: images/164613_-1_0_image_4_share_00002.webp - text: '-' output: url: images/171703_-1_0_image_4_share_00001.webp - text: '-' output: url: images/171146_-1_0_image_4_share_00001.webp - text: '-' output: url: images/171414_-1_0_image_4_share_00001.webp - text: '-' output: url: images/164613_-1_0_image_4_share_00004.webp - text: '-' output: url: images/171703_-1_0_image_4_share_00002.webp - text: '-' output: url: images/172240_-1_0_image_4_share_00001.webp - text: '-' output: url: images/172251_-1_0_image_4_share_00001.webp - text: '-' output: url: images/175243_-1_0_image_4_share_00001.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Taylor Swift | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I'm not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/TaylorSwift-FluxLora/tree/main) them in the Files & versions tab.
psi-hi/segformer-b0-finetuned-segments-sidewalk-2
psi-hi
2024-10-27T11:30:47Z
34
0
transformers
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "endpoints_compatible", "region:us" ]
null
2024-10-27T06:53:06Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: segformer-b0-finetuned-segments-sidewalk-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-segments-sidewalk-2 This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
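The hyperparameters above map onto the standard 🤗 `TrainingArguments`; a minimal sketch is shown below (the output directory is a placeholder, and the dataset/model setup is not specified by this card):

```python
from transformers import TrainingArguments

# Hyperparameters from the card expressed as TrainingArguments; the Adam betas,
# epsilon, and linear scheduler listed above are the library defaults.
args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",  # placeholder
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```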
MikeRoz/TheDrummer_Behemoth-123B-v1.1-6.0bpw-h6-exl2
MikeRoz
2024-10-27T11:28:06Z
5
2
null
[ "safetensors", "mistral", "license:other", "6-bit", "exl2", "region:us" ]
null
2024-10-27T06:17:30Z
--- license: other --- # Join our Discord! https://discord.gg/Nbv9pQ88Xb ## Nearly 2000 members strong 💪 --- [BeaverAI](https://huggingface.co/BeaverAI) proudly presents... # Behemoth 123B v1.1 🦣 - Creative Edition *When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/5405NZoj_ptSMO_qM09EW.png) ## Description > One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine > I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better. > v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison. > v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously. > The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments that I would say... 'Shit, let I never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes, those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else > It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging. ## Links - Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1 - GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF - iMatrix: WIP ## Arsenal (Supported Chat Templates) - Mistral - Smart, adaptable, familiar - Metharme (Pygmalion in ST) - Creative, unhinged, unique - Alpaca - Creative, unique, unhinged - Text Completion - You can mix it up and see which works best for you. ### Favorite RP Format `*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV ## What's Next? - Already have plans for a v2! ## Special Thanks - Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier. - KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/KvyYIIA1zkxQNEdGro007.png) <audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
rufimelo/Legal-BERTimbau-sts-large
rufimelo
2024-10-27T11:25:37Z
42
2
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "pt", "dataset:assin", "dataset:assin2", "dataset:rufimelo/PortugueseLegalSentences-v0", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-07-25T09:52:35Z
--- language: - pt thumbnail: "Portugues BERT for the Legal Domain" pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - transformers datasets: - assin - assin2 - rufimelo/PortugueseLegalSentences-v0 widget: - source_sentence: "O advogado apresentou as provas ao juíz." sentences: - "O juíz leu as provas." - "O juíz leu o recurso." - "O juíz atirou uma pedra." example_title: "Example 1" model-index: - name: BERTimbau results: - task: name: STS type: STS metrics: - name: Pearson Correlation - assin Dataset type: Pearson Correlation value: 0.76629 - name: Pearson Correlation - assin2 Dataset type: Pearson Correlation value: 0.82357 - name: Pearson Correlation - stsb_multi_mt pt Dataset type: Pearson Correlation value: 0.79120 --- # rufimelo/Legal-BERTimbau-sts-large This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. rufimelo/Legal-BERTimbau-sts-large is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large. It is adapted to the Portuguese legal domain and trained for STS on portuguese datasets. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Isto é um exemplo", "Isto é um outro exemplo"] model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-large') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-large') model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-large') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results STS | Model| Assin | Assin2|stsb_multi_mt pt| avg| | ---------------------------------------- | ---------- | ---------- |---------- |---------- | | Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462| | Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886| | Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307| | Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657| | Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369| | Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715| | Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142| | Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863| | Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**| | Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165| | Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090| | Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029| | Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 | | ---------------------------------------- | ---------- |---------- |---------- |---------- | | BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640| | BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245| | ---------------------------------------- | ---------- |---------- |---------- |---------- | | paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429| | paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682| ## Training rufimelo/Legal-BERTimbau-sts-large is based on Legal-BERTimbau-large which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) large. It was trained for Semantic Textual Similarity, being submitted to a fine tuning stage with the [assin](https://huggingface.co/datasets/assin) and [assin2](https://huggingface.co/datasets/assin2) datasets. 
## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors If you use this work, please cite: ```bibtex @inproceedings{souza2020bertimbau, author = {F{\'a}bio Souza and Rodrigo Nogueira and Roberto Lotufo}, title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese}, booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)}, year = {2020} } @inproceedings{fonseca2016assin, title={ASSIN: Avaliacao de similaridade semantica e inferencia textual}, author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S}, booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal}, pages={13--15}, year={2016} } @inproceedings{real2020assin, title={The assin 2 shared task: a quick overview}, author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo}, booktitle={International Conference on Computational Processing of the Portuguese Language}, pages={406--412}, year={2020}, organization={Springer} } @InProceedings{huggingface:dataset:stsb_multi_mt, title = {Machine translated multilingual STS benchmark dataset.}, author={Philip May}, year={2021}, url={https://github.com/PhilipMay/stsb-multi-mt} } ```
tuanbc88/ft-t5-small-nl-2-fol-v1
tuanbc88
2024-10-27T11:15:06Z
116
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-27T11:14:29Z
--- library_name: transformers license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: ft-t5-small-nl-2-fol-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-t5-small-nl-2-fol-v1 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the yuan-yang/MALLS-v0, alevkov95/text2log dataset. It achieves the following results on the evaluation set: - Loss: 1.0732 - Top-1 accuracy: 0.0 - Bleu Score: 0.3056 - Rouge1: 0.5254 - Rouge2: 0.2795 - Rougel: 0.5082 - Rougelsum: 0.5083 - Exact Match: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Top-1 accuracy | Bleu Score | Rouge1 | Rouge2 | Rougel | Rougelsum | Exact Match | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------:|:------:|:------:|:------:|:---------:|:-----------:| | 1.6921 | 1.0 | 3231 | 1.0767 | 0.0 | 0.3052 | 0.5249 | 0.2786 | 0.5076 | 0.5077 | 0.0 | | 1.688 | 2.0 | 6462 | 1.0741 | 0.0 | 0.3056 | 0.5254 | 0.2795 | 0.5081 | 0.5082 | 0.0 | | 1.679 | 3.0 | 9693 | 1.0734 | 0.0 | 0.3056 | 0.5254 | 0.2796 | 0.5081 | 0.5082 | 0.0 | | 1.6846 | 4.0 | 12924 | 1.0733 | 0.0 | 0.3058 | 0.5255 | 0.2798 | 0.5083 | 0.5083 | 0.0 | | 1.6889 | 5.0 | 16155 | 1.0734 | 0.0 | 0.3056 | 0.5253 | 0.2798 | 0.5082 | 0.5083 | 0.0 | | 1.6725 | 6.0 | 19386 | 1.0733 | 0.0 | 0.3056 | 0.5254 | 0.2799 | 0.5084 | 0.5084 | 0.0 | | 1.6771 | 7.0 | 22617 | 1.0733 | 0.0 | 0.3056 | 0.5254 | 0.2797 | 0.5083 | 0.5083 | 0.0 | | 1.6843 | 8.0 | 25848 | 1.0734 | 0.0 | 0.3056 | 0.5255 | 0.2797 | 0.5084 | 0.5084 | 0.0 | | 1.6651 | 9.0 | 29079 | 1.0733 | 0.0 | 0.3054 | 0.5252 | 0.2795 | 0.5081 | 0.5082 | 0.0 | | 1.7005 | 10.0 | 32310 | 1.0732 | 0.0 | 0.3056 | 0.5254 | 0.2795 | 0.5082 | 0.5083 | 0.0 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.2 - Tokenizers 0.20.1
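The card above reports metrics but no usage snippet, so here is a minimal seq2seq inference sketch, assuming the checkpoint loads as a standard T5-style model from the repo id shown. The input phrasing is an assumption, since the card does not document the expected prompt format for the natural-language-to-first-order-logic task.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "tuanbc88/ft-t5-small-nl-2-fol-v1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# The expected input format is not documented on the card; a bare
# natural-language statement is assumed here.
text = "All humans are mortal."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```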
ajithnarayanan/flant5-large-aio
ajithnarayanan
2024-10-27T11:13:27Z
112
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-27T10:47:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yasinbastug/triage_llm
yasinbastug
2024-10-27T11:12:15Z
81
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-27T11:09:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DeZoomer/Rihanna-FluxLora
DeZoomer
2024-10-27T10:51:02Z
17
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T10:48:36Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/184632_-1_0_image_4_share_00002.webp - text: '-' output: url: images/184945_-1_0_image_4_share_00003.webp - text: '-' output: url: images/184632_-1_0_image_4_share_00003.webp - text: '-' output: url: images/184632_-1_0_image_4_share_00004.webp - text: '-' output: url: images/184945_-1_0_image_4_share_00007.webp - text: '-' output: url: images/184945_-1_0_image_4_share_00006.webp - text: '-' output: url: images/184946_-1_0_image_4_share_00003.webp - text: '-' output: url: images/184632_-1_0_image_4_share_00001.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Rihanna | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/Rihanna-FluxLora/tree/main) them in the Files & versions tab.
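The card gives recommended LoRA strength and FluxGuidance values but no code, since the author works in ComfyUI. The sketch below shows one way to apply the same settings with diffusers; loading the adapter via `load_lora_weights` and passing the strength through `joint_attention_kwargs` are assumptions about your diffusers version, not something the card documents. The same pattern applies to the other DeZoomer Flux LoRAs later in this listing, with only the repo id changed.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # or pipe.to("cuda") on a large GPU

# Apply this LoRA on top of the base model.
pipe.load_lora_weights("DeZoomer/Rihanna-FluxLora")

image = pipe(
    "Portrait photo of a woman in a garden.",  # example prompt from the card
    width=768,
    height=1024,
    guidance_scale=3.5,            # the card recommends FluxGuidance 3-4
    num_inference_steps=28,
    # Assumption: recent diffusers versions read the LoRA strength from this
    # kwarg; the card's 0.8-1.2 recommendation maps to this scale value.
    joint_attention_kwargs={"scale": 1.0},
).images[0]
image.save("flux_lora_sample.png")
```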
jebish7/indicbert-B
jebish7
2024-10-27T10:48:37Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-27T10:48:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
somya-kr/code-llama-7b-lsi-v1.2
somya-kr
2024-10-27T10:46:13Z
6
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:codellama/CodeLlama-7b-hf", "base_model:adapter:codellama/CodeLlama-7b-hf", "license:llama2", "region:us" ]
null
2024-10-24T09:55:24Z
--- base_model: codellama/CodeLlama-7b-hf library_name: peft license: llama2 tags: - trl - sft - generated_from_trainer model-index: - name: code-llama-7b-lsi-v1.2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # code-llama-7b-lsi-v1.2 This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
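The card above records training hyperparameters only. Since this repo holds a PEFT adapter rather than full weights, a minimal sketch of loading it on top of the CodeLlama base is shown below, assuming the adapter was saved in the usual PEFT format; the prompt string is illustrative, as the card does not document an expected input format.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
adapter_id = "somya-kr/code-llama-7b-lsi-v1.2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned LoRA adapter to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt; adjust to whatever the adapter was actually trained on.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```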
DeZoomer/KimKardashian-FluxLora
DeZoomer
2024-10-27T10:46:01Z
97
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T10:44:10Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/091628_-1_0_image_4_share_00001.webp - text: '-' output: url: images/091633_-1_0_image_4_share_00005.webp - text: '-' output: url: images/091628_-1_0_image_4_share_00003.webp - text: '-' output: url: images/091628_-1_0_image_4_share_00004.webp - text: '-' output: url: images/091628_-1_0_image_4_share_00005.webp - text: '-' output: url: images/091632_-1_0_image_4_share_00003.webp - text: '-' output: url: images/091632_-1_0_image_4_share_00005.webp - text: '-' output: url: images/090419_-1_0_image_4_share_00001.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Kim Kardashian | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/KimKardashian-FluxLora/tree/main) them in the Files & versions tab.
DeZoomer/Beyonce-FluxLora
DeZoomer
2024-10-27T10:20:00Z
65
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T10:16:29Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/091621_-1_0_image_4_share_00003.webp - text: '-' output: url: images/091616_-1_0_image_4_share_00003.webp - text: '-' output: url: images/091621_-1_0_image_4_share_00004.webp - text: '-' output: url: images/091622_-1_0_image_4_share_00002.webp - text: '-' output: url: images/091616_-1_0_image_4_share_00005.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Beyoncé | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/Beyonce-FluxLora/tree/main) them in the Files & versions tab.
mav23/Llama3.1-Gutenberg-Doppel-70B-GGUF
mav23
2024-10-27T10:14:47Z
56
0
transformers
[ "transformers", "gguf", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:nbeerbower/gutenberg2-dpo", "base_model:mlabonne/Hermes-3-Llama-3.1-70B-lorablated", "base_model:quantized:mlabonne/Hermes-3-Llama-3.1-70B-lorablated", "license:llama3.1", "model-index", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-27T02:49:25Z
--- license: llama3.1 library_name: transformers base_model: - mlabonne/Hermes-3-Llama-3.1-70B-lorablated datasets: - jondurbin/gutenberg-dpo-v0.1 - nbeerbower/gutenberg2-dpo model-index: - name: Llama3.1-Gutenberg-Doppel-70B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 70.92 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Llama3.1-Gutenberg-Doppel-70B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 52.56 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Llama3.1-Gutenberg-Doppel-70B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 13.75 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Llama3.1-Gutenberg-Doppel-70B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 12.64 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Llama3.1-Gutenberg-Doppel-70B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 22.68 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Llama3.1-Gutenberg-Doppel-70B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 41.52 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Llama3.1-Gutenberg-Doppel-70B name: Open LLM Leaderboard --- ![image/png](https://huggingface.co/nbeerbower/Mistral-Small-Gutenberg-Doppel-22B/resolve/main/doppel-header?download=true) # Llama3.1-Gutenberg-Doppel-70B [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) and [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo). ### Method [ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 2x H100 for 3 epochs. Thank you [Schneewolf Labs](https://schneewolflabs.com/) for the compute. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__Llama3.1-Gutenberg-Doppel-70B) | Metric |Value| |-------------------|----:| |Avg. |35.68| |IFEval (0-Shot) |70.92| |BBH (3-Shot) |52.56| |MATH Lvl 5 (4-Shot)|13.75| |GPQA (0-shot) |12.64| |MuSR (0-shot) |22.68| |MMLU-PRO (5-shot) |41.52|
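Since this repo is a GGUF conversion, here is a minimal llama-cpp-python sketch for running it, assuming the package was installed with GPU support. The quant filename pattern is an assumption; check the repo's file listing for the quantization that is actually published.

```python
from llama_cpp import Llama

# Filename pattern is an assumption; pick the quant actually present in the repo.
llm = Llama.from_pretrained(
    repo_id="mav23/Llama3.1-Gutenberg-Doppel-70B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,
    n_gpu_layers=-1,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a thoughtful long-form writer."},
        {"role": "user", "content": "Open a short story set in a lighthouse during a storm."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```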
allknowingroger/Qwen-modelstock-15B
allknowingroger
2024-10-27T10:14:17Z
6
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "arxiv:2403.19522", "base_model:allknowingroger/Qwen2.5-slerp-14B", "base_model:merge:allknowingroger/Qwen2.5-slerp-14B", "base_model:allknowingroger/Qwenslerp2-14B", "base_model:merge:allknowingroger/Qwenslerp2-14B", "base_model:allknowingroger/Qwenslerp3-14B", "base_model:merge:allknowingroger/Qwenslerp3-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T10:05:50Z
--- base_model: - allknowingroger/Qwenslerp2-14B - allknowingroger/Qwenslerp3-14B - allknowingroger/Qwen2.5-slerp-14B library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [allknowingroger/Qwenslerp2-14B](https://huggingface.co/allknowingroger/Qwenslerp2-14B) as a base. ### Models Merged The following models were included in the merge: * [allknowingroger/Qwenslerp3-14B](https://huggingface.co/allknowingroger/Qwenslerp3-14B) * [allknowingroger/Qwen2.5-slerp-14B](https://huggingface.co/allknowingroger/Qwen2.5-slerp-14B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: allknowingroger/Qwenslerp2-14B - model: allknowingroger/Qwenslerp3-14B - model: allknowingroger/Qwen2.5-slerp-14B merge_method: model_stock base_model: allknowingroger/Qwenslerp2-14B normalize: false int8_mask: true dtype: bfloat16 ```
DeZoomer/AngelinaJolie-FLuxLora
DeZoomer
2024-10-27T10:01:05Z
6
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T09:58:56Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/004357_-1_0_image_4_share_00001.webp - text: '-' output: url: images/004421_-1_0_image_4_share_00001.webp - text: '-' output: url: images/004731_-1_0_image_4_share_00001.webp - text: '-' output: url: images/004732_-1_0_image_4_share_00002.webp - text: '-' output: url: images/005753_-1_0_image_4_share_00001.webp - text: '-' output: url: images/005753_-1_0_image_4_share_00003.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Angelina Jolie | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/AngelinaJolie-FLuxLora/tree/main) them in the Files & versions tab.
d4niel92/llama-3.2-1B-orpo
d4niel92
2024-10-27T09:57:07Z
175
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T09:50:17Z
--- library_name: transformers datasets: - mlabonne/orpo-dpo-mix-40k base_model: - meta-llama/Llama-3.2-1B --- # Model Card ## Model Description This is a Large Language Model (LLM) trained on a subset of the dataset "mlabonne/orpo-dpo-mix-40k". ## Evaluation Results ### Hellaswag | Metric | Value | | --- | --- | | Accuracy | 0.4517 | ## How to Use To use this model, simply download the checkpoint and load it into your preferred deep learning framework.
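The card above says to download the checkpoint and load it in your preferred framework; a minimal transformers sketch follows, assuming the repo ships standard weights plus a tokenizer. The prompt and sampling settings are illustrative, not recommendations from the author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "d4niel92/llama-3.2-1B-orpo"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "Explain in two sentences why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```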
DeZoomer/AdrianaLima-FluxLora
DeZoomer
2024-10-27T09:54:58Z
19
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "stable-diffusion", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-27T09:51:10Z
--- tags: - text-to-image - flux - lora - diffusers - stable-diffusion widget: - text: '-' output: url: images/091554_-1_0_image_4_share_00007.webp - text: '-' output: url: images/091554_-1_0_image_4_share_00008.webp - text: '-' output: url: images/091607_-1_0_image_4_share_00001.webp - text: '-' output: url: images/091608_-1_0_image_4_share_00002.webp - text: '-' output: url: images/091609_-1_0_image_4_share_00004.webp base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md inference: parameters: width: 768 height: 1024 --- # Adriana Lima | Flux <Gallery /> ## Model description Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev). Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed. Example prompt (ComfyUI): *Portrait photo of a woman in a garden.* **Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions). ## Background I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others. After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities. My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much. If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above. Enjoy using my LoRAs and have fun! ## Download model Weights for this model are available in Safetensors format. [Download](/DeZoomer/AdrianaLima-FluxLora/tree/main) them in the Files & versions tab.
Sri3010/wav2vec2-large-xls-r-300m-TAMIL-colab
Sri3010
2024-10-27T09:54:02Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-10-26T14:44:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Nessrine9/Finetune2-MiniLM-L12-v2
Nessrine9
2024-10-27T09:45:36Z
7
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:100000", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-MiniLM-L12-v2", "base_model:finetune:sentence-transformers/all-MiniLM-L12-v2", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-10-27T09:45:26Z
--- base_model: sentence-transformers/all-MiniLM-L12-v2 library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:100000 - loss:CosineSimilarityLoss widget: - source_sentence: A woman wearing a yellow shirt is holding a plate which contains a piece of cake. sentences: - The woman in the yellow shirt might have cut the cake and placed it on the plate. - Male bicyclists compete in the Tour de France. - The man is walking - source_sentence: People gather and talk in the street. sentences: - Club goers outside discussing the police raid. - a woman is leaning on a skateboard - There are many people singing. - source_sentence: A child sliding face first down a metal tube sentences: - A man with a red shirt is bowling with his 2 sons. - The child is sliding face first - There is a girl in a dress. - source_sentence: A man walking a gray poodle is walking past a billboard with a cow on it. sentences: - A house build with wooden stairs and the family is enjoying sitting on them - A woman is playing checkers. - The man is walking his grey cat. - source_sentence: A man fishing in a pointy blue boat on a river lined with palm trees. sentences: - Labrador Retrievers are energetic dogs that will play catch for hours. - A man rubs his bald head. - The man is with friends. model-index: - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2 results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: snli dev type: snli-dev metrics: - type: pearson_cosine value: 0.5002872232214081 name: Pearson Cosine - type: spearman_cosine value: 0.49187589438593304 name: Spearman Cosine - type: pearson_manhattan value: 0.47522303163337404 name: Pearson Manhattan - type: spearman_manhattan value: 0.49169237941097593 name: Spearman Manhattan - type: pearson_euclidean value: 0.47599896939605724 name: Pearson Euclidean - type: spearman_euclidean value: 0.49187587264847454 name: Spearman Euclidean - type: pearson_dot value: 0.5002872256206143 name: Pearson Dot - type: spearman_dot value: 0.49187604689169206 name: Spearman Dot - type: pearson_max value: 0.5002872256206143 name: Pearson Max - type: spearman_max value: 0.49187604689169206 name: Spearman Max --- # SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision 30ce63ae64e71b9199b3d2eae9de99f64a26eedc --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 384 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Nessrine9/Finetune2-MiniLM-L12-v2") # Run inference sentences = [ 'A man fishing in a pointy blue boat on a river lined with palm trees.', 'The man is with friends.', 'A man rubs his bald head.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 384] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `snli-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:-----------| | pearson_cosine | 0.5003 | | spearman_cosine | 0.4919 | | pearson_manhattan | 0.4752 | | spearman_manhattan | 0.4917 | | pearson_euclidean | 0.476 | | spearman_euclidean | 0.4919 | | pearson_dot | 0.5003 | | spearman_dot | 0.4919 | | pearson_max | 0.5003 | | **spearman_max** | **0.4919** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 100,000 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 4 tokens</li><li>mean: 16.38 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.56 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:-------------------------------------------------------------------------------|:------------------------------------------|:-----------------| | <code>Three men in an art gallery posing for the camera.</code> | <code>Paintings are nearby.</code> | <code>0.5</code> | | <code>A shirtless man wearing a vest walks on a stage with his arms up.</code> | <code>The man is about to perform.</code> | <code>0.5</code> | | <code>The man is walking outside near a rocky river.</code> | <code>The man is walking</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 4 - `fp16`: True - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 4 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: 
{'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | snli-dev_spearman_max | |:------:|:-----:|:-------------:|:---------------------:| | 0.08 | 500 | 0.1842 | 0.3333 | | 0.16 | 1000 | 0.1489 | 0.3449 | | 0.24 | 1500 | 0.1427 | 0.3633 | | 0.32 | 2000 | 0.1391 | 0.3854 | | 0.4 | 2500 | 0.1401 | 0.4015 | | 0.48 | 3000 | 0.139 | 0.3982 | | 0.56 | 3500 | 0.1352 | 0.4327 | | 0.64 | 4000 | 0.1319 | 0.4262 | | 0.72 | 4500 | 0.1336 | 0.4034 | | 0.8 | 5000 | 0.1321 | 0.4021 | | 0.88 | 5500 | 0.1309 | 0.4294 | | 0.96 | 6000 | 0.1271 | 0.4198 | | 1.0 | 6250 | - | 0.4317 | | 1.04 | 6500 | 0.132 | 0.4445 | | 1.12 | 7000 | 0.1296 | 0.4509 | | 1.2 | 7500 | 0.1236 | 0.4559 | | 1.28 | 8000 | 0.1257 | 0.4542 | | 1.3600 | 8500 | 0.1236 | 0.4507 | | 1.44 | 9000 | 0.1277 | 0.4540 | | 1.52 | 9500 | 0.1249 | 0.4664 | | 1.6 | 10000 | 0.1208 | 0.4418 | | 1.6800 | 10500 | 0.1228 | 0.4457 | | 1.76 | 11000 | 0.1212 | 0.4222 | | 1.8400 | 11500 | 0.1203 | 0.4507 | | 1.92 | 12000 | 0.119 | 0.4572 | | 2.0 | 12500 | 0.1196 | 0.4667 | | 2.08 | 13000 | 0.1194 | 0.4733 | | 2.16 | 13500 | 0.1172 | 0.4786 | | 2.24 | 14000 | 0.1172 | 0.4765 | | 2.32 | 14500 | 0.1145 | 0.4717 | | 2.4 | 15000 | 0.1167 | 0.4803 | | 2.48 | 15500 | 0.1177 | 0.4678 | | 2.56 | 16000 | 0.1162 | 0.4805 | | 2.64 | 16500 | 0.1137 | 0.4780 | | 2.7200 | 17000 | 0.1153 | 0.4788 | | 2.8 | 17500 | 0.115 | 0.4784 | | 2.88 | 18000 | 0.1128 | 0.4864 | | 2.96 | 18500 | 0.11 | 0.4812 | | 3.0 | 18750 | - | 0.4823 | | 3.04 | 19000 | 0.1136 | 0.4900 | | 3.12 | 19500 | 0.1135 | 0.4897 | | 3.2 | 20000 | 0.1094 | 0.4856 | | 3.2800 | 20500 | 0.1108 | 0.4889 | | 3.36 | 21000 | 0.1083 | 0.4909 | | 3.44 | 21500 | 0.1133 | 0.4892 | | 3.52 | 22000 | 0.1106 | 0.4910 | | 3.6 | 22500 | 0.1079 | 0.4888 | | 3.68 | 23000 | 0.1091 | 0.4890 | | 3.76 | 23500 | 0.1079 | 0.4822 | | 3.84 | 24000 
| 0.1087 | 0.4887 | | 3.92 | 24500 | 0.1066 | 0.4926 | | 4.0 | 25000 | 0.1069 | 0.4919 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.2.1 - Transformers: 4.44.2 - PyTorch: 2.5.0+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.2 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
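For readers who want to reproduce a comparable fine-tune, below is a minimal sketch — not the original training script — of how a run with the hyperparameters listed above (CosineSimilarityLoss, batch size 16, 4 epochs, fp16) could be set up with the `SentenceTransformerTrainer` API; the tiny inline dataset and the `output_dir` name are placeholders, and the real 100,000-pair training set is not reproduced here.

```python
# Minimal fine-tuning sketch mirroring the hyperparameters listed in the card.
# The one-row dataset below is only a placeholder for the real 100k-pair data.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Columns follow the card: sentence_0, sentence_1, and a float label in [0, 1].
train_dataset = Dataset.from_dict({
    "sentence_0": ["Three men in an art gallery posing for the camera."],
    "sentence_1": ["Paintings are nearby."],
    "label": [0.5],
})

loss = CosineSimilarityLoss(model)  # MSE between cosine similarity and the label

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-minilm",   # placeholder output directory
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    fp16=True,                       # as in the card; requires a GPU
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```

An `EmbeddingSimilarityEvaluator` over a held-out split (as in the Evaluation section above) can additionally be passed to the trainer via its `evaluator` argument.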
mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF
mradermacher
2024-10-27T09:32:09Z
21
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted", "base_model:quantized:toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-27T06:38:08Z
--- base_model: toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
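As a concrete starting point for the quants listed above, here is a minimal sketch — an illustration, not part of the original card — of loading the i1-Q4_K_M file with llama-cpp-python, assuming the GGUF file has already been downloaded locally (for example with `huggingface-cli download`).

```python
# Illustrative sketch: running the i1-Q4_K_M quant from the table above with
# llama-cpp-python (pip install llama-cpp-python). Adjust model_path to wherever
# the GGUF file was downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-22B-ArliAI-RPMax-Diluted.i1-Q4_K_M.gguf",
    n_ctx=4096,       # context length; raise or lower to fit available memory
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

output = llm("Write a one-sentence greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```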
Primeness/DeezNutz313
Primeness
2024-10-27T09:19:21Z
36
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T08:15:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ravi-ednova/merged-model
ravi-ednova
2024-10-27T09:14:36Z
104
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T09:12:11Z
--- base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** ravi-ednova - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
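A minimal usage sketch (not part of the original upload notes): assuming the merged weights load as a standard instruct-tuned causal language model, the checkpoint can be tried with the 🤗 Transformers `pipeline` API; a recent Transformers release is needed for the chat-style message input used below.

```python
# Illustrative sketch, assuming the merged checkpoint behaves like a standard
# instruct-tuned causal LM. device_map="auto" requires the accelerate package,
# and passing chat messages to the pipeline needs a recent transformers release.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ravi-ednova/merged-model",
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain fine-tuning in one sentence."}]
result = generator(messages, max_new_tokens=96)
print(result[0]["generated_text"][-1]["content"])
```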
mrmocciai/genshin-impact
mrmocciai
2024-10-27T09:07:52Z
0
29
null
[ "music", "audio-to-audio", "ja", "license:mit", "region:us" ]
audio-to-audio
2023-06-28T18:23:34Z
--- language: - ja license: mit metrics: - accuracy pipeline_tag: audio-to-audio tags: - music --- # <center> RVC Models Genshin Impact V2 Japanese<br /> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <style> .rounded { border-radius: 15px; /* You can change this value as needed */ } </style> </head> <body> <img src="https://huggingface.co/mocci24/RVCV2-GI/resolve/main/model-cover.jpg" alt="Image description" class="rounded"> </body> <div align="center"> <br />OPEN ON [![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/spaces/mocci24/rvc-genshin-v2) </div> --- ## <center> INFO <br /> Model Creators: <br /> ~ <b>[Mocci24](https://youtube.com/@mrmocciai)</b> <br /> ~ <b>[HirumiM](https://huggingface.co/HirumiM)</b> <br /> ---- ## <b>Looking for example song covers from these models?</b><br /> - Song 1 [A Thousand Years by Kamisato Ayaka (AI)](https://www.bandlab.com/post/082a21f6-000a-ee11-907c-000d3a41ef61)<br /> - Song 3 [Like I'm Gonna Lose You by Ayaka ft. Kazuha (AI)](https://www.bandlab.com/post/392d1794-6529-ee11-a9bb-000d3a41e8b8)<br /> <p style="color: red;"> There are no sample songs on YouTube because the channel was taken down. There is an alternative on BandLab; check below:</p><br /> <div style="display: flex; align-items: center;"> <a href="https://www.bandlab.com/moccito"> <img src="bandlab.png" alt="Icon" style="width: 50px; height: 50px;"> </a> <p style="color: orange; font-weight: bold; margin-left: 10px;">BANDLAB</p> </div> ---- #### <center> RVC V2 Models Information <br /> Trained on the original RVC Training V2.<br /> Using the pitch extraction methods "harvest" and <i><b>"rmvpe"</b></i>.<br /> Minimum 300 epochs, 40k sample rate, and 5-20 minutes of dataset including battle voice.<br /> ---- # <center> Current Models (51 Total), sorted by name <br /> ~ Aether 1000 epochs (harvest) by HirumiM<br /> ~ Amber 400 epochs (harvest) by HirumiM<br /> <b>~ ARLECCHINO 300 EPOCHS (rmvpe) by MrMocci</b><br /> ~ Bennett 400 epochs (harvest) by HirumiM<br /> <b>~ BEIDOU 400 EPOCHS (rmvpe) by HirumiM<br /></b> ~ Candace 400 epochs (harvest) by Mocci24<br /> ~ Childe 400 epochs (harvest) by HirumiM<br /> <b>~ CHIORI 300 epochs (rmvpe) by Mocci24<br /> ~ CLORINDE 225 EPOCHS (rmvpe) by Mocci24</b><br /> ~ Collei 400 epochs (harvest) by HirumiM<br /> <b>~ DEHYA 400 EPOCHS (rmvpe) by Mocci24<br /> ~ EULA 400 EPOCHS (rmvpe) by Mocci24<br /> ~ FARUZAN 400 EPOCHS (rmvpe) by HirumiM<br /> ~ FURINA 375 EPOCHS (rmvpe) by Mocci24</b><br /> ~ Ganyu 400 epochs (harvest) by Mocci24<br /> ~ Hutao 400 epochs (harvest) by Mocci24<br /> <b>~ JEAN 400 EPOCHS (rmvpe) by Mocci24<br/> ~ KAEDEHARA KAZUHA 400 EPOCHS (rmvpe) by HirumiM<br /></b> ~ Kamisato Ayaka 1000 epochs (harvest) by Mocci24<br /> <b>~ KAMISATO AYAKA 400 EPOCHS (rmvpe) by HirumiM & Mocci24<br /></b> ~ Kaveh 400 epochs (harvest) by HirumiM<br /> <b>(Note: Set the speaker/singer id to 4; this only applies to the Kaveh model. For other models it stays the same.)</b><br /> ~ Keqing 400 epochs (harvest) by HirumiM<br /> <b>~ KIRARA 400 EPOCHS (rmvpe) by Mocci24<br /> ~ KUJO SARA 400 EPOCHS (rmvpe) by Mocci24</b><br /> ~ Layla 400 epochs (harvest) by HirumiM<br /> <b>~ LYNETTE 400 EPOCHS (rmvpe) by Mocci24<br /> ~ LYNEY 400 EPOCHS (rmvpe) by Mocci24<br /></b> ~ Lisa 400 epochs (harvest) by Mocci24<br /> ~ Lumine 1000 epochs (harvest) by HirumiM<br /> <b>~ MAVUIKA 300 EPOCHS (rmvpe) by Mocci24</b><br /> ~ Mualani --- epochs (----) by Hanvy12345<br /> ~ Nahida 400 epochs (harvest) by HirumiM<br />
<b>~ NAVIA 400 EPOCHS (rmvpe) by Mocci24<br /> ~ NEUVILLETTE 400 EPOCHS (rmvpe) by Mocci24<br /> ~ NILOU 400 EPOCHS (rmvpe) by Mocci24<br /> ~ PAIMON 400 EPOCHS (rmvpe) by HirumiM<br /> ~ RAIDEN EI 400 EPOCHS (rmvpe) by HirumiM<br /> ~ RAIDEN PUPPET 400 EPOCHS (rmvpe) by HirumiM<br /></b> ~ Sangonomiya Kokomi 400 epochs (harvest) by Mocci24<br /> <b>~ SHENHE 400 EPOCHS (rmvpe) by Mocci24<br /> ~ VENTI 400 EPOCHS (rmvpe) by Mocci24</b><br /> ~ Wanderer 400 epochs (harvest) by HirumiM<br /> <b>~ WRIOTHESLEY 350 EPOCHS (rmvpe) by Mocci24<br /> ~ Xianyun --- epochs (----) by Hanvy12345<br /> ~ XIANGLING 400 EPOCHS (rmvpe) by Mocci24<br /></b> ~ Xiao 400 epochs (harvest) by HirumiM<br /> ~ Xinyan 400 epochs (harvest) by HirumiM<br /> ~ Yae Miko 400 epochs (harvest) by Mocci24 <br /> ~ Yanfei 400 epochs (harvest) by HirumiM<br /> <b>~ YELAN 400 EPOCHS (rmvpe) by HirumiM<br /></b> ~ Yoimiya 500 epochs (harvest) by Mocci24<br /> ~ Zhongli 400 epochs (harvest) by Mocci24<br /> <br /> ---- <br /> <br /> <b>Changes:<br /></b> <br />- Added model Dehya (rmvpe) <br />- Added model Kirara (rmvpe) <br />- Added model Nilou (rmvpe) <br />- Added model Paimon (rmvpe) <br />- Added model Lynette (rmvpe) <br />- Added model Venti (rmvpe) <br />- Added model Navia (rmvpe) <br />- Added model Neuvillette (rmvpe) <br />- Added model Faruzan (rmvpe) <br />- Added model Clorinde (rmvpe) <b>"This model is still early; its dataset is still limited, so the training results may not be good enough yet"</b><br /> <br />- Added model Kujo Sara (rmvpe) <br />- Added model Lyney (rmvpe) <br />- Added model Shenhe (rmvpe) <br />- Added model Wriothesley (rmvpe) <br />- Updated model Furina (rmvpe) <br />- Added model Chiori (rmvpe) <span style="color: yellow;">NEW!</span> <br />- Added model Arlecchino (rmvpe) <span style="color: yellow;">NEW!</span> <br />- Added model Xianyun (unknown) <span style="color: yellow;">NEW!</span> <br />- Added model Mualani (unknown) <span style="color: yellow;">NEW!</span> <br />- Added model Mavuika (rmvpe) <span style="color: yellow;">NEW!</span> ---- ##### ---- Copy this to your Colab notebook (run it before running/installing requirements.txt): ```bash !apt install git-lfs !git lfs install !git clone https://huggingface.co/mrmocciai/genshin-impact ``` and this: ```bash !git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git !mkdir -p /content/Retrieval-based-Voice-Conversion-WebUI/logs !cp -r /content/genshin-impact/req/* /content/Retrieval-based-Voice-Conversion-WebUI !mv /content/genshin-impact/model/* /content/Retrieval-based-Voice-Conversion-WebUI/logs !mv /content/Retrieval-based-Voice-Conversion-WebUI/logs/weights/* /content/Retrieval-based-Voice-Conversion-WebUI/weights %cd /content/Retrieval-based-Voice-Conversion-WebUI !mkdir -p pretrained uvr5_weights ``` <br /> ----