Dataset columns (types and observed ranges):

| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-28 06:27:35 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 500 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-28 06:24:42 |
| card | string | length 11 – 1.01M |
gradientai/Llama-3-8B-Instruct-262k
gradientai
2024-10-28T20:45:40Z
9,576
257
transformers
[ "transformers", "safetensors", "llama", "text-generation", "meta", "llama-3", "conversational", "en", "arxiv:2309.00071", "arxiv:2402.08268", "arxiv:2305.14233", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-25T06:24:10Z
---
language:
- en
pipeline_tag: text-generation
tags:
- meta
- llama-3
license: llama3
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/>

# Llama-3 8B Gradient Instruct 262k

Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7

Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. To learn more or collaborate on a custom model, drop us a message at [email protected].

[Join our Discord](https://discord.com/invite/2QVy2qt2mf)

This model extends Llama-3 8B's context length from 8k to beyond 160k tokens. It was developed by Gradient and sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training (< 200M tokens) by appropriately adjusting RoPE theta.

**Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/644fac0ce1d7a97f3b653ab1/s9T8L-6Jh5fYH6Q_88r3g.png)

**Approach:**

- [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base
- NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by a new data-driven RoPE theta optimization technique
- Progressive training on increasing context lengths, similar to the [Large World Model](https://huggingface.co/LargeWorldModel) [2] (see details below)

**Infra:**

We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 262,144 tokens on [Crusoe Energy](https://huggingface.co/crusoeai)'s high-performance L40S cluster.

**Quantized versions and GGUF**

GGUF is available on Crusoe's Hugging Face account. Check it out here: [crusoeai/Llama-3-8B-Instruct-262k-GGUF](https://huggingface.co/crusoeai/Llama-3-8B-Instruct-262k-GGUF)

**Exl2 quantized versions**

Exl2 is available on Bullerwins's Hugging Face account. Check it out here:
[8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_8.0bpw)
[6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_6.0bpw)
[5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_exl2_5.0bpw)

**Updated Exl2 quants for the 5/3 improved weights**

[8.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_8.0bpw)
[6.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_6.0bpw)
[5.0bpw exl2](https://huggingface.co/bullerwins/gradientai_Llama-3-8B-Instruct-262k_v2_exl2_5.0bpw)

**Data:**

For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2].
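The NTK-aware initialization named under **Approach** can be made concrete. The sketch below implements the standard NTK-aware theta-scaling rule from [1], assuming Llama-3 8B's default `rope_theta` of 500,000 and head dimension of 128; note that the theta values in the Progressive Training table that follows come out of the subsequent data-driven optimization step, so this rule provides only the starting point, not a reproduction of those numbers.

```python
# Sketch of the standard NTK-aware RoPE theta initialization [1].
# Assumptions: Llama-3 8B uses rope_theta=500000 and head_dim=128.
# The card's final theta values (15.3M at 65K, 207.1M at 262K) come from
# the data-driven optimization that follows this initialization.

def ntk_aware_theta(base_theta: float, scale: float, head_dim: int = 128) -> float:
    # Stretch theta so the lowest RoPE frequency covers the longer window.
    return base_theta * scale ** (head_dim / (head_dim - 2))

base_theta = 500_000.0          # Llama-3 8B default
for new_ctx in (65_536, 262_144):
    scale = new_ctx / 8_192     # extension factor over the native 8k window
    print(new_ctx, f"{ntk_aware_theta(base_theta, scale):.3e}")
```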
**Progressive Training Details:**

| Parameter | 65K | 262K |
|----------------------------|-----------------|-------------|
| Initialize From | LLaMA-3-8B-Inst | 65K |
| Sequence Length | 2^16 | 2^18 |
| RoPE theta | 15.3 M | 207.1 M |
| Batch Size (Tokens / Step) | 2.097 M | 4.192 M |
| Steps | 30 | 24 |
| Total Tokens | 63 M | 101 M |
| Learning Rate | 2.00E-05 | 2.00E-05 |
| # GPUs | 8 | 32 |
| GPU Type | NVIDIA L40S | NVIDIA L40S |

**Evaluation Details:**

```
EVAL_MAX_CONTEXT_LENGTH=320200
EVAL_MIN_CONTEXT_LENGTH=100
EVAL_CONTEXT_INTERVAL=16000
EVAL_DEPTH_INTERVAL=0.2
EVAL_NUM_SAMPLES=2
EVAL_RND_NUMBER_DIGITS=8
HAYSTACK:
EVAL_GENERATOR_TOKENS=925000
```

Haystack is "haystack 3", further detailed in this [blog post](https://gradient.ai/blog/the-haystack-matters-for-niah-evals).

## The Gradient AI Team

https://gradient.ai/

Gradient is accelerating AI transformation across industries. Our AI Foundry incorporates your data to deploy autonomous assistants that power critical operations across your business.

## Contact Us

Drop an email to [[email protected]](mailto:[email protected])

## Citation

```bibtex
@misc{gradientlongcontextllama3,
  title={Llama 3 Gradient: A series of long context models},
  author={Leonid Pekelis and Michael Feil and Forrest Moret and Mark Huang and Tiffany Peng},
  year={2024},
  url={https://gradient.ai/blog/scaling-rotational-embeddings-for-long-context-language-models}
}
```

## References

[1] Peng, Bowen, et al. "YaRN: Efficient context window extension of large language models." arXiv preprint arXiv:2309.00071 (2023).
[2] Liu, Hao, et al. "World Model on Million-Length Video and Language with RingAttention." arXiv preprint arXiv:2402.08268 (2024).
[3] https://github.com/jzhang38/EasyContext
[4] Ding, Ning, et al. "Enhancing chat language models by scaling high-quality instructional conversations." arXiv preprint arXiv:2305.14233 (2023).

----

# Base Model

## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

<table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table>

**Llama 3 family of models**.
Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

## How to use

This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.

### Use with transformers

You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.
#### Transformers pipeline

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### Use with `llama3`

Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
```

For Hugging Face support, we recommend using transformers or TGI, but a similar command works.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

<table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table>

**CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources.
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).

### Base pretrained models

<table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table>

### Instruction tuned models

<table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table>

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning through to the deployment of systems composed of safeguards that tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs, and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.

#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

<span style="text-decoration:underline;">Safety</span>

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any large language model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

<span style="text-decoration:underline;">Refusals</span>

In addition to residual risks, we put a great emphasis on model refusals of benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We've heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
#### Critical risks

<span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### <span style="text-decoration:underline;">Cyber Security</span>

We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### <span style="text-decoration:underline;">Child Safety</span>

Child Safety risk assessments were conducted using a team of experts to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances and experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners, including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama).

Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives.
Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.

Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)

## Citation instructions

@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
Tombiczek/sentiment_model_distilbert_base_v1
Tombiczek
2024-10-28T20:38:46Z
201
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T20:38:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
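The card's "How to Get Started" section above is empty, so here is a minimal hedged sketch for trying `Tombiczek/sentiment_model_distilbert_base_v1` with the transformers pipeline. The label names are defined by the repo's config, not documented in the card, so treat the printed labels as whatever the uploader configured.

```python
# Sketch: query the sentiment classifier via the transformers pipeline.
# Assumption: the checkpoint's config defines the label names; they may
# be generic (e.g. LABEL_0/LABEL_1) if the author did not set id2label.
from transformers import pipeline

clf = pipeline("text-classification", model="Tombiczek/sentiment_model_distilbert_base_v1")
print(clf("This movie was surprisingly good."))
```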
ychu612/BioClinicalBERT_rsavav_fn_adult2_hq
ychu612
2024-10-28T20:33:51Z
165
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:emilyalsentzer/Bio_ClinicalBERT", "base_model:finetune:emilyalsentzer/Bio_ClinicalBERT", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-10T06:50:14Z
--- library_name: transformers license: mit base_model: emilyalsentzer/Bio_ClinicalBERT tags: - generated_from_trainer model-index: - name: BioClinicalBERT_rsavav_fn_adult2_hq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BioClinicalBERT_rsavav_fn_adult2_hq This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
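For readers who want to reproduce a comparable run, the hyperparameters listed in the card above map directly onto `transformers` `TrainingArguments`. This is a sketch under the assumption of a standard `Trainer` setup; the `output_dir` is a hypothetical path, and the dataset and model wiring are omitted because the card does not document them.

```python
# Sketch: the card's hyperparameters expressed as TrainingArguments.
# Adam betas/epsilon and the linear scheduler match library defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bioclinicalbert_rsavav",   # hypothetical path
    learning_rate=2.2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=3,          # 16 * 3 = 48 effective batch
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,                              # "Native AMP" mixed precision
)
```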
muhtasham/tajik-llama3-1b-lora-finetuned-gguf
muhtasham
2024-10-28T20:33:12Z
13
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-28T20:31:35Z
---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** muhtasham
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
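The card names Unsloth and the `unsloth/llama-3.2-1b-bnb-4bit` base but gives no recipe. Below is a rough sketch of that kind of LoRA setup; every hyperparameter here is an illustrative assumption, not the author's actual configuration.

```python
# Sketch of an Unsloth LoRA fine-tuning setup like the one described.
# All hyperparameters are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, training would proceed with TRL's SFTTrainer, as the card notes,
# and the adapters would be merged and exported to GGUF.
```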
samanehs/bert_tiny_en_uncased_classifier
samanehs
2024-10-28T20:30:10Z
6
0
keras-hub
[ "keras-hub", "text-classification", "region:us" ]
text-classification
2024-04-17T19:53:33Z
--- library_name: keras-hub pipeline_tag: text-classification --- This is a [`Bert` model](https://keras.io/api/keras_nlp/models/bert) uploaded using the KerasNLP library. This model is related to a `Classifier` task. Model config: * **name:** bert_backbone * **trainable:** True * **vocabulary_size:** 30522 * **num_layers:** 2 * **num_heads:** 2 * **hidden_dim:** 128 * **intermediate_dim:** 512 * **dropout:** 0.1 * **max_sequence_length:** 512 * **num_segments:** 2 This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
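Checkpoints like this one can typically be loaded straight from the Hub with KerasNLP's `from_preset`. A minimal sketch, assuming a KerasNLP/keras-hub release recent enough to resolve `hf://` handles; the meaning of the output labels is defined by the uploader, not by this snippet.

```python
# Sketch: load the uploaded classifier from the Hub with KerasNLP.
# Assumption: the installed KerasNLP version supports hf:// presets.
import keras_nlp

classifier = keras_nlp.models.BertClassifier.from_preset(
    "hf://samanehs/bert_tiny_en_uncased_classifier"
)
preds = classifier.predict(["The model card needs completing."])
print(preds)  # raw logits; label semantics are defined by the uploader
```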
samanehs/test_bert
samanehs
2024-10-28T20:29:49Z
2
0
keras-hub
[ "keras-hub", "text-classification", "region:us" ]
text-classification
2024-04-24T17:10:50Z
--- library_name: keras-hub pipeline_tag: text-classification --- This is a [`Bert` model](https://keras.io/api/keras_nlp/models/bert) uploaded using the KerasNLP library. This model is related to a `Classifier` task. Model config: * **name:** bert_backbone * **trainable:** True * **vocabulary_size:** 30522 * **num_layers:** 2 * **num_heads:** 2 * **hidden_dim:** 128 * **intermediate_dim:** 512 * **dropout:** 0.1 * **max_sequence_length:** 512 * **num_segments:** 2 This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
samanehs/finetuned_gpt2
samanehs
2024-10-28T20:29:41Z
4
0
keras-hub
[ "keras-hub", "text-generation", "region:us" ]
text-generation
2024-04-29T22:24:27Z
--- library_name: keras-hub pipeline_tag: text-generation --- This is a [`GPT2` model](https://keras.io/api/keras_nlp/models/gpt2) uploaded using the KerasNLP library and can be used with JAX, TensorFlow, and PyTorch backends. This model is related to a `CausalLM` task. Model config: * **name:** gpt2_backbone * **trainable:** True * **vocabulary_size:** 50257 * **num_layers:** 12 * **num_heads:** 12 * **hidden_dim:** 768 * **intermediate_dim:** 3072 * **dropout:** 0.1 * **max_sequence_length:** 1024 This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
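Since this checkpoint is a `CausalLM` task, generation is a one-liner with KerasNLP. A sketch, again assuming a release recent enough to resolve `hf://` handles; the prompt is illustrative.

```python
# Sketch: text generation with the uploaded GPT-2 checkpoint via KerasNLP.
# Assumption: the installed KerasNLP version supports hf:// presets.
import keras_nlp

causal_lm = keras_nlp.models.GPT2CausalLM.from_preset("hf://samanehs/finetuned_gpt2")
print(causal_lm.generate("The quick brown fox", max_length=40))
```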
mav23/pygmalion-2-13b-GGUF
mav23
2024-10-28T20:26:12Z
99
0
null
[ "gguf", "text generation", "instruct", "text-generation", "en", "dataset:PygmalionAI/PIPPA", "dataset:Open-Orca/OpenOrca", "dataset:Norquinal/claude_multiround_chat_30k", "dataset:jondurbin/airoboros-gpt4-1.4.1", "dataset:databricks/databricks-dolly-15k", "license:llama2", "region:us" ]
text-generation
2024-10-28T18:49:25Z
---
language:
- en
thumbnail: null
tags:
- text generation
- instruct
pipeline_tag: text-generation
inference: false
license: llama2
datasets:
- PygmalionAI/PIPPA
- Open-Orca/OpenOrca
- Norquinal/claude_multiround_chat_30k
- jondurbin/airoboros-gpt4-1.4.1
- databricks/databricks-dolly-15k
---

<h1 style="text-align: center">Pygmalion-2 13B</h1>
<h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2>

## Model Details

The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 13B (formerly known as Metharme) is based on [Llama-2 13B](https://huggingface.co/meta-llama/llama-2-13b-hf) released by Meta AI.

The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion that the Metharme prompting format is superior to (and easier to use than) the classic Pygmalion format.

This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached.

This model is freely available for both commercial and non-commercial use, as per the Llama-2 license.

## Prompting

The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`.

The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can appear multiple times and be chained to form a conversation history.

### Prompting example

The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example:

```
<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:
{{persona}}

You shall reply to the user while staying in character, and generate long responses.
```

## Dataset

The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction datasets, and datasets acquired from various RP forums.

## Limitations and biases

The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope.

As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.

## Acknowledgements

We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
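Since the roles above are plain text tokens rather than a packaged chat template, a small helper makes the format concrete. A sketch, using the turn structure described in the card; the persona and messages are illustrative placeholders.

```python
# Sketch: assemble a Metharme-style prompt from the card's role tokens.
# The persona and turns are illustrative placeholders.
def build_prompt(persona: str, turns: list[tuple[str, str]]) -> str:
    prompt = (
        "<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:\n"
        f"{persona}\n\n"
        "You shall reply to the user while staying in character, and generate long responses."
    )
    for role, text in turns:           # role is "user" or "model"
        prompt += f"<|{role}|>{text}"
    return prompt + "<|model|>"        # ask the model to speak next

print(build_prompt("A weary sea captain.", [("user", "Where are we headed?")]))
```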
nihiluis/legal-sachzivil-relations-bert
nihiluis
2024-10-28T20:14:39Z
106
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T20:14:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aialt/MMedL3-MI
aialt
2024-10-28T20:08:33Z
12
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2410.13458", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-19T18:27:50Z
---
license: llama3
---

This repository contains the model from the paper [MedINST: Meta Dataset of Biomedical Instructions](https://huggingface.co/papers/2410.13458).

# Citation

```
@inproceedings{han2024medinst,
  title={MedINST: Meta Dataset of Biomedical Instructions},
  author={Han, Wenhan and Fang, Meng and Zhang, Zihan and Yin, Yu and Song, Zirui and Chen, Ling and Pechenizkiy, Mykola and Chen, Qingyu},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  year={2024}
}
```
aialt/LLaMA3-MI
aialt
2024-10-28T20:08:11Z
5
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2410.13458", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-06-19T17:53:10Z
---
license: llama3
---

This repository contains the model from the paper [MedINST: Meta Dataset of Biomedical Instructions](https://huggingface.co/papers/2410.13458).

# Citation

```
@inproceedings{han2024medinst,
  title={MedINST: Meta Dataset of Biomedical Instructions},
  author={Han, Wenhan and Fang, Meng and Zhang, Zihan and Yin, Yu and Song, Zirui and Chen, Ling and Pechenizkiy, Mykola and Chen, Qingyu},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  year={2024}
}
```
BigHuggyD/TheDrummer_Behemoth-123B-v1.1_exl2_8.0bpw_h6
BigHuggyD
2024-10-28T20:03:30Z
6
1
null
[ "safetensors", "mistral", "license:other", "8-bit", "exl2", "region:us" ]
null
2024-10-28T19:19:02Z
---
license: other
---

# Join our Discord! https://discord.gg/Nbv9pQ88Xb

## Nearly 2000 members strong 💪

---

[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...

# Behemoth 123B v1.1 🦣 - Creative Edition

*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/5405NZoj_ptSMO_qM09EW.png)

## Description

> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that, but with much higher general intelligence, really makes it a favourite of mine.

> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.

> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.

> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.

> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments where I would say, 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else.

> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce, because it's different than anything else, is what keeps it engaging.

## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v1.1-GGUF (recommended for smaller quants)

## Arsenal (Supported Chat Templates)
- Mistral - Smart, adaptable, familiar
- Metharme (Pygmalion in ST) - Creative, unhinged, unique
- Alpaca - Creative, unique, unhinged
- Text Completion - You can mix it up and see which works best for you.

### Favorite RP Format

`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

## What's Next?
- Already have plans for a v2!

## Special Thanks
- Thank you to each and every one who donated on [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/KvyYIIA1zkxQNEdGro007.png)

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
nihiluis/legal-sach-relations-bert
nihiluis
2024-10-28T19:54:50Z
87
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T19:54:20Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hung200504/bert-squadv2
hung200504
2024-10-28T19:52:26Z
82
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "en", "dataset:squad_v2", "base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext", "base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2023-10-23T07:01:23Z
--- license: mit base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: bert-squadv2-biomed results: - task: type: question-answering dataset: type: reading-comprehension name: SQuADv2 metrics: - name: accuracy type: accuracy value: 0.88 verified: false language: - en pipeline_tag: question-answering --- # bert-squadv2-biomed This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the SQuADv2 dataset. It has been fine-tuned for question-answering over biomedical texts, using SQuADv2 to strengthen its ability to handle both answerable and unanswerable questions. ## Model Description The base model, **PubMedBERT**, was originally pre-trained on biomedical abstracts and full-text articles from PubMed. This fine-tuned version adapts PubMedBERT for biomedical question-answering by training it with **SQuADv2**, a dataset of over 100,000 questions, both answerable and unanswerable. - **Use Cases**: This model is particularly useful in applications where quick and accurate question-answering from biomedical literature is needed. It is designed to provide answers to specific questions, as well as to detect when no relevant answer exists. ## Training and Evaluation Data - **Dataset**: The model was fine-tuned on the **SQuADv2** dataset, which consists of reading comprehension tasks where some questions have no answer in the provided context. - **Training Environment**: The model was trained in a Colab environment. A link to the training notebook can be found here: [Training Notebook](https://colab.research.google.com/drive/11je7-YnFQ-oISxC_7KS4QTfs3fgWOseU?usp=sharing). 
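## How to Get Started (sketch)

A minimal inference sketch follows; it is not part of the original card. It assumes the default Hub revision and uses the transformers question-answering pipeline. Passing `handle_impossible_answer=True` lets the model return an empty answer for unanswerable questions, matching the SQuADv2 training objective; the example context and question are illustrative.

```python
# Minimal sketch: biomedical extractive QA with the transformers pipeline.
# The context and question are illustrative; handle_impossible_answer=True
# allows an empty-string answer when the context contains no answer.
from transformers import pipeline

qa = pipeline("question-answering", model="hung200504/bert-squadv2")

context = (
    "BRCA1 is a human tumor suppressor gene. Mutations in BRCA1 are "
    "associated with an increased risk of breast and ovarian cancer."
)
result = qa(
    question="Which cancers are associated with BRCA1 mutations?",
    context=context,
    handle_impossible_answer=True,
)
print(result["answer"], result["score"])
```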
## Training Procedure ### Hyperparameters The following hyperparameters were used during training: - `learning_rate`: 3e-05 - `train_batch_size`: 16 - `eval_batch_size`: 16 - `seed`: 42 - `optimizer`: Adam (betas=(0.9, 0.999), epsilon=1e-08) - `lr_scheduler_type`: linear - `num_epochs`: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.9623 | 0.02 | 5 | 5.8084 | | 5.6934 | 0.04 | 10 | 5.4377 | | 5.2457 | 0.06 | 15 | 4.8548 | | 4.5796 | 0.08 | 20 | 4.2851 | | 4.1507 | 0.1 | 25 | 3.9911 | | 4.1134 | 0.12 | 30 | 3.7444 | | 3.8076 | 0.14 | 35 | 3.5019 | | 3.8445 | 0.16 | 40 | 3.0715 | | 3.0969 | 0.18 | 45 | 2.6475 | | 2.8899 | 0.2 | 50 | 2.5662 | | 2.8354 | 0.22 | 55 | 2.3382 | | 3.1775 | 0.24 | 60 | 2.2028 | | 2.3935 | 0.26 | 65 | 2.2038 | | 2.3994 | 0.28 | 70 | 1.9708 | | 2.2664 | 0.3 | 75 | 1.9092 | | 1.8134 | 0.32 | 80 | 1.9546 | | 2.1905 | 0.34 | 85 | 1.8623 | | 2.3941 | 0.36 | 90 | 1.7622 | | 1.8807 | 0.38 | 95 | 1.7976 | | 2.3562 | 0.4 | 100 | 1.7311 | | 2.1116 | 0.42 | 105 | 1.6848 | | 1.8022 | 0.44 | 110 | 1.6636 | | 2.0378 | 0.46 | 115 | 1.6401 | | 1.7313 | 0.48 | 120 | 1.6013 | | 1.9304 | 0.5 | 125 | 1.5312 | | 1.7668 | 0.52 | 130 | 1.4995 | | 1.908 | 0.54 | 135 | 1.5222 | | 1.9348 | 0.56 | 140 | 1.5180 | | 1.7307 | 0.58 | 145 | 1.4694 | | 1.9088 | 0.6 | 150 | 1.4597 | | 1.3283 | 0.62 | 155 | 1.4631 | | 1.6898 | 0.64 | 160 | 1.4715 | | 1.7079 | 0.66 | 165 | 1.4565 | | 1.6261 | 0.68 | 170 | 1.4246 | | 1.5628 | 0.7 | 175 | 1.4248 | | 1.7642 | 0.72 | 180 | 1.4261 | | 1.5168 | 0.74 | 185 | 1.4088 | | 1.5967 | 0.76 | 190 | 1.4028 | | 1.275 | 0.78 | 195 | 1.4294 | | 1.596 | 0.8 | 200 | 1.4128 | | 1.5765 | 0.82 | 205 | 1.4032 | | 1.6554 | 0.84 | 210 | 1.3599 | | 1.785 | 0.86 | 215 | 1.3221 | | 1.4147 | 0.88 | 220 | 1.3299 | | 1.4364 | 0.9 | 225 | 1.3510 | | 1.6059 | 0.92 | 230 | 1.2959 | | 1.305 | 0.94 | 235 | 1.2871 | | 1.4614 | 0.96 | 240 | 1.2986 | | 1.3531 | 0.98 | 245 | 1.3891 | | 1.3192 | 1.0 | 250 | 1.3526 | | 1.0726 | 1.02 | 255 | 1.3378 | | 1.1724 | 1.04 | 260 | 1.3207 | | 1.2818 | 1.06 | 265 | 1.3034 | | 1.1 | 1.08 | 270 | 1.2991 | | 1.0719 | 1.1 | 275 | 1.2799 | | 1.231 | 1.12 | 280 | 1.2880 | | 1.3378 | 1.14 | 285 | 1.3066 | | 1.0818 | 1.16 | 290 | 1.2954 | | 1.0873 | 1.18 | 295 | 1.2754 | | 1.1567 | 1.2 | 300 | 1.2741 | | 1.1031 | 1.22 | 305 | 1.2502 | | 1.1391 | 1.24 | 310 | 1.2674 | | 1.2142 | 1.26 | 315 | 1.2849 | | 0.9893 | 1.28 | 320 | 1.2841 | | 1.0846 | 1.3 | 325 | 1.2748 | | 1.2535 | 1.32 | 330 | 1.2628 | | 1.1309 | 1.34 | 335 | 1.2410 | | 0.9969 | 1.36 | 340 | 1.2267 | | 1.0932 | 1.38 | 345 | 1.2032 | | 1.4972 | 1.4 | 350 | 1.1923 | | 0.9547 | 1.42 | 355 | 1.1954 | | 1.1322 | 1.44 | 360 | 1.2043 | | 0.8833 | 1.46 | 365 | 1.2234 | | 0.7986 | 1.48 | 370 | 1.2600 | | 1.1929 | 1.5 | 375 | 1.2788 | | 0.9585 | 1.52 | 380 | 1.2554 | | 1.3862 | 1.54 | 385 | 1.2165 | | 1.1168 | 1.56 | 390 | 1.2064 | | 1.135 | 1.58 | 395 | 1.1976 | | 0.8741 | 1.6 | 400 | 1.1933 | | 1.3593 | 1.62 | 405 | 1.1857 | | 1.0084 | 1.64 | 410 | 1.1851 | | 0.9579 | 1.66 | 415 | 1.1728 | | 0.9541 | 1.68 | 420 | 1.1721 | | 1.2569 | 1.7 | 425 | 1.1773 | | 1.0629 | 1.72 | 430 | 1.1717 | | 1.1233 | 1.74 | 435 | 1.1671 | | 0.8304 | 1.76 | 440 | 1.1742 | | 0.8097 | 1.78 | 445 | 1.1861 | | 0.9703 | 1.8 | 450 | 1.1822 | | 1.1413 | 1.82 | 455 | 1.1909 | | 1.0977 | 1.84 | 460 | 1.1938 | | 1.0375 | 1.86 | 465 | 1.1839 | | 1.0726 | 1.88 | 470 | 1.1871 | | 1.1322 | 1.9 | 475 | 1.2020 | | 1.0286 | 1.92 | 480 | 1.2004 | | 0.9395 | 1.94 | 485 
| 1.1981 | | 1.059 | 1.96 | 490 | 1.1772 | | 1.0722 | 1.98 | 495 | 1.1568 | | 0.8618 | 2.0 | 500 | 1.1475 | | 0.9305 | 2.02 | 505 | 1.1554 | | 0.8525 | 2.04 | 510 | 1.1740 | | 1.0687 | 2.06 | 515 | 1.1759 | | 0.8899 | 2.08 | 520 | 1.1647 | | 0.6881 | 2.1 | 525 | 1.1755 | | 0.8582 | 2.12 | 530 | 1.1920 | | 0.6645 | 2.14 | 535 | 1.1952 | | 0.6028 | 2.16 | 540 | 1.2121 | | 0.7364 | 2.18 | 545 | 1.2169 | | 0.5562 | 2.2 | 550 | 1.2278 | | 0.6175 | 2.22 | 555 | 1.2413 | | 0.5392 | 2.24 | 560 | 1.2466 | | 0.8727 | 2.26 | 565 | 1.2362 | | 0.6778 | 2.28 | 570 | 1.2253 | | 0.685 | 2.3 | 575 | 1.2254 | | 0.8991 | 2.32 | 580 | 1.2181 | | 1.0157 | 2.34 | 585 | 1.2044 | | 0.5054 | 2.36 | 590 | 1.1943 | | 0.8036 | 2.38 | 595 | 1.1950 | | 0.6207 | 2.4 | 600 | 1.2025 | | 0.6828 | 2.42 | 605 | 1.2178 | | 0.8008 | 2.44 | 610 | 1.2312 | | 0.739 | 2.46 | 615 | 1.2401 | | 0.5479 | 2.48 | 620 | 1.2459 | | 0.9443 | 2.5 | 625 | 1.2359 | | 0.7468 | 2.52 | 630 | 1.2264 | | 0.6803 | 2.54 | 635 | 1.2223 | | 0.8997 | 2.56 | 640 | 1.2208 | | 0.7044 | 2.58 | 645 | 1.2118 | | 0.707 | 2.6 | 650 | 1.2076 | | 0.7813 | 2.62 | 655 | 1.2072 | | 0.6376 | 2.64 | 660 | 1.2122 | | 0.8885 | 2.66 | 665 | 1.2141 | | 0.7359 | 2.68 | 670 | 1.2121 | | 0.6928 | 2.7 | 675 | 1.2113 | | 0.7706 | 2.72 | 680 | 1.2082 | | 0.884 | 2.74 | 685 | 1.2033 | | 0.6362 | 2.76 | 690 | 1.1991 | | 0.8517 | 2.78 | 695 | 1.1959 | | 0.7713 | 2.8 | 700 | 1.1954 | | 0.8654 | 2.82 | 705 | 1.1945 | | 0.6268 | 2.84 | 710 | 1.1923 | | 0.8246 | 2.86 | 715 | 1.1919 | | 0.646 | 2.88 | 720 | 1.1920 | | 0.8648 | 2.9 | 725 | 1.1922 | | 0.8398 | 2.92 | 730 | 1.1928 | | 0.6281 | 2.94 | 735 | 1.1931 | | 0.6319 | 2.96 | 740 | 1.1927 | | 0.6304 | 2.98 | 745 | 1.1932 | | 0.6554 | 3.0 | 750 | 1.1930 | ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
tomaszkii/model-v5
tomaszkii
2024-10-28T19:50:33Z
45
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T19:46:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
glif-loradex-trainer/i12bp8_appelsiensam_flashtattoo
glif-loradex-trainer
2024-10-28T19:48:44Z
86
2
diffusers
[ "diffusers", "text-to-image", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:finetune:black-forest-labs/FLUX.1-dev", "license:other", "region:us", "flux", "lora", "base_model:adapter:black-forest-labs/FLUX.1-dev" ]
text-to-image
2024-10-28T19:48:24Z
--- tags: - diffusers - text-to-image - template:sd-lora - base_model:black-forest-labs/FLUX.1-dev - base_model:finetune:black-forest-labs/FLUX.1-dev - license:other - region:us - flux - lora widget: - output: url: samples/1730144840164__000001500_0.jpg text: sloth driving a car, flshtt - output: url: samples/1730144864716__000001500_1.jpg text: cyborg samurai flshtt - output: url: samples/1730144889260__000001500_2.jpg text: a bird catching a worm, flshtt base_model: black-forest-labs/FLUX.1-dev trigger: flshtt instance_prompt: flshtt license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # appelsiensam_flashtattoo Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `i12bp8`. <Gallery /> ## Trigger words You should use `flshtt` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/glif-loradex-trainer/i12bp8_appelsiensam_flashtattoo/tree/main) them in the Files & versions tab. ## License This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
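## Usage sketch (diffusers)

The card above targets the Glif ecosystem, but the LoRA can also be tried locally with diffusers. The sketch below is a hedged example, not an official recipe: it assumes a CUDA GPU, accepted access to the gated FLUX.1-dev base weights, and that `load_lora_weights` finds the adapter file in this repository; the prompt is taken from the sample gallery above.

```python
# Sketch: apply the flshtt LoRA on top of FLUX.1-dev with diffusers.
# Assumes a CUDA GPU and accepted access to the gated base model.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("glif-loradex-trainer/i12bp8_appelsiensam_flashtattoo")
pipe.to("cuda")

image = pipe(
    "cyborg samurai flshtt",  # sample prompt from the gallery above
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flashtattoo_sample.png")
```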
MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF
MaziyarPanahi
2024-10-28T19:43:45Z
66
2
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored", "base_model:quantized:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored", "region:us", "conversational" ]
text-generation
2024-10-28T19:11:11Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF base_model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored inference: false model_creator: aifeifei798 pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF](https://huggingface.co/MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF) - Model creator: [aifeifei798](https://huggingface.co/aifeifei798) - Original model: [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored) ## Description [MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF](https://huggingface.co/MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF) contains GGUF format model files for [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
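## Usage sketch (llama-cpp-python)

Since llama-cpp-python is listed among the supporting clients, a hedged loading sketch follows. The quant filename glob is an assumption — check the repository's file list for the exact `.gguf` names before running.

```python
# Sketch: download and run one GGUF quant with llama-cpp-python.
# The filename glob is an assumption; pick a real file from the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored-GGUF",
    filename="*Q4_K_M.gguf",
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```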
compstak/classify-google-augment-3
compstak
2024-10-28T19:37:15Z
46
0
null
[ "tensorboard", "safetensors", "vit", "autotrain", "image-classification", "base_model:google/vit-large-patch16-224", "base_model:finetune:google/vit-large-patch16-224", "region:us" ]
image-classification
2024-10-28T19:20:05Z
--- tags: - autotrain - image-classification base_model: google/vit-large-patch16-224 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.3821594715118408 f1_macro: 0.8644630522360383 f1_micro: 0.886969696969697 f1_weighted: 0.8837489529217776 precision_macro: 0.8700338902181693 precision_micro: 0.886969696969697 precision_weighted: 0.8838390180385471 recall_macro: 0.8628333333333335 recall_micro: 0.886969696969697 recall_weighted: 0.886969696969697 accuracy: 0.886969696969697
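## Usage sketch

A minimal inference sketch, not part of the original card; the sample image URL is the tiger example from the widget section above, and the class labels come from the AutoTrain dataset.

```python
# Sketch: classify an image with the fine-tuned ViT via the pipeline API.
from transformers import pipeline

clf = pipeline("image-classification", model="compstak/classify-google-augment-3")
preds = clf(
    "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
)
print(preds)  # list of {label, score} dicts, highest score first
```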
arjunMadhu1995/emotion_tweet_distilbert-base-uncased_2024-10-28
arjunMadhu1995
2024-10-28T19:34:01Z
152
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T01:30:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
slokesha/vit-base-patch16-224-in21k
slokesha
2024-10-28T19:33:58Z
198
0
transformers
[ "transformers", "tensorboard", "safetensors", "optimum_habana", "vit", "image-classification", "vision", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-28T19:18:30Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - vision - generated_from_trainer model-index: - name: vit-base-patch16-224-in21k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the chainyo/rvl-cdip dataset. It achieves the following results on the evaluation set: - eval_loss: 2.7757 - eval_model_preparation_time: 0.0119 - eval_accuracy: 0.0567 - eval_runtime: 362.8091 - eval_samples_per_second: 132.301 - eval_steps_per_second: 2.067 - memory_allocated (GB): 0.79 - max_memory_allocated (GB): 0.87 - total_memory_available (GB): 94.62 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.0a0+git74cd574 - Datasets 3.0.2 - Tokenizers 0.20.1
emire666/sai-ual
emire666
2024-10-28T19:31:20Z
5
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-28T19:31:16Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: sai_ual license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # sai_ual A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `sai_ual` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
g-assismoraes/mdeberta-domain_EN_fold3
g-assismoraes
2024-10-28T19:26:32Z
146
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T19:22:57Z
--- library_name: transformers license: mit base_model: microsoft/deberta-v3-base tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: mdeberta-domain_EN_fold3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-domain_EN_fold3 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4288 - Accuracy: 0.8414 - Precision: 0.7835 - Recall: 0.7813 - F1: 0.7618 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.0342 | 1.0 | 19 | 0.8300 | 0.5931 | 0.8644 | 0.3333 | 0.2482 | | 0.7812 | 2.0 | 38 | 0.6683 | 0.6414 | 0.8744 | 0.4111 | 0.3821 | | 0.6431 | 3.0 | 57 | 0.6047 | 0.7793 | 0.8260 | 0.6406 | 0.5553 | | 0.6002 | 4.0 | 76 | 0.5521 | 0.7931 | 0.8316 | 0.6636 | 0.6016 | | 0.4757 | 5.0 | 95 | 0.4576 | 0.7862 | 0.6713 | 0.6974 | 0.6574 | | 0.4112 | 6.0 | 114 | 0.5542 | 0.7517 | 0.6641 | 0.7157 | 0.6721 | | 0.34 | 7.0 | 133 | 0.4608 | 0.8069 | 0.7236 | 0.7246 | 0.7004 | | 0.2907 | 8.0 | 152 | 0.4542 | 0.7931 | 0.7067 | 0.7470 | 0.7149 | | 0.2521 | 9.0 | 171 | 0.4539 | 0.8138 | 0.7410 | 0.7811 | 0.7514 | | 0.2023 | 10.0 | 190 | 0.4288 | 0.8414 | 0.7835 | 0.7813 | 0.7618 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
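### Reproduction sketch

For readers who want to reproduce the setup, a hedged `TrainingArguments` sketch matching the hyperparameters listed above is shown below; dataset loading, tokenization, and metric wiring are omitted, and `output_dir` is illustrative.

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
# Dataset and metric wiring are omitted; output_dir is illustrative.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mdeberta-domain_EN_fold3",
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```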
g-assismoraes/mdeberta-domain_EN_fold1
g-assismoraes
2024-10-28T19:19:35Z
140
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T19:16:21Z
--- library_name: transformers license: mit base_model: microsoft/deberta-v3-base tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: mdeberta-domain_EN_fold1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-domain_EN_fold1 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5362 - Accuracy: 0.8288 - Precision: 0.7887 - Recall: 0.7656 - F1: 0.7752 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.0312 | 1.0 | 19 | 0.8773 | 0.5890 | 0.8630 | 0.3333 | 0.2471 | | 0.7596 | 2.0 | 38 | 0.7653 | 0.5890 | 0.8630 | 0.3333 | 0.2471 | | 0.7097 | 3.0 | 57 | 0.7352 | 0.5890 | 0.8630 | 0.3333 | 0.2471 | | 0.6673 | 4.0 | 76 | 0.7382 | 0.7466 | 0.6626 | 0.5889 | 0.5519 | | 0.6028 | 5.0 | 95 | 0.7362 | 0.7740 | 0.6837 | 0.6406 | 0.5884 | | 0.4939 | 6.0 | 114 | 0.6345 | 0.7466 | 0.6145 | 0.6034 | 0.5967 | | 0.3969 | 7.0 | 133 | 0.5446 | 0.8014 | 0.7220 | 0.7140 | 0.6938 | | 0.3291 | 8.0 | 152 | 0.5437 | 0.8082 | 0.7452 | 0.7468 | 0.7410 | | 0.2975 | 9.0 | 171 | 0.5534 | 0.7945 | 0.7437 | 0.7101 | 0.7235 | | 0.2573 | 10.0 | 190 | 0.5362 | 0.8288 | 0.7887 | 0.7656 | 0.7752 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B
PJMixers-Dev
2024-10-28T19:16:04Z
6
0
null
[ "safetensors", "llama", "en", "dataset:PJMixers-Dev/HailMary-v0.1-KTO", "base_model:PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B", "base_model:finetune:PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B", "license:llama3.2", "model-index", "region:us" ]
null
2024-10-28T01:48:20Z
--- license: llama3.2 language: - en datasets: - PJMixers-Dev/HailMary-v0.1-KTO base_model: - PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B model-index: - name: PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 65.04 name: strict accuracy source: url: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 22.29 name: normalized accuracy source: url: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 11.78 name: exact match source: url: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 2.91 name: acc_norm source: url: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 4.69 name: acc_norm source: url: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 23.42 name: accuracy source: url: https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details name: Open LLM Leaderboard --- [PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B) was further trained using KTO (with `apo_zero_unpaired` loss type) using a mix of instruct, RP, and storygen datasets. I created rejected samples by using the SFT with bad settings (including logit bias) for every model turn. The model was only trained at `max_length=6144`, and is nowhere near a full epoch as it eventually crashed. So think of this like a test of a test. 
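# KTO Training Sketch

The description above names the pieces of the run (base model, KTO dataset, `apo_zero_unpaired` loss, `max_length=6144`); a hedged TRL sketch wiring them together follows. Dataset column expectations, the exact TRL version, and `output_dir` are assumptions, not the author's actual script.

```python
# Hedged sketch of the described KTO run with TRL. Dataset columns,
# TRL version details, and output_dir are assumptions; loss_type matches
# the apo_zero_unpaired loss named above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-3B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
train_dataset = load_dataset("PJMixers-Dev/HailMary-v0.1-KTO", split="train")

config = KTOConfig(
    output_dir="kto-hailmary",
    loss_type="apo_zero_unpaired",
    max_length=6144,
)
trainer = KTOTrainer(
    model=model,
    args=config,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
    train_dataset=train_dataset,
)
trainer.train()
```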
# W&B Training Logs ![train/rewards/chosen/rejected](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_rewards_chosen_rejected.png) ![train/rewards/margins](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_rewards_margins.png) ![train/logits/chosen/rejected](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_logits_chosen_rejected.png) ![train/logps/chosen/rejected](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_logps_chosen_rejected.png) ![train/loss](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_loss.png) ![train/grad_norm](https://huggingface.co/PJMixers-Dev/LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B/resolve/main/images/train_grad_norm.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/PJMixers-Dev__LLaMa-3.2-Instruct-JankMix-v0.2-SFT-HailMary-v0.1-KTO-3B-details) | Metric |Value| |-------------------|----:| |Avg. |21.69| |IFEval (0-Shot) |65.04| |BBH (3-Shot) |22.29| |MATH Lvl 5 (4-Shot)|11.78| |GPQA (0-shot) | 2.91| |MuSR (0-shot) | 4.69| |MMLU-PRO (5-shot) |23.42|
g-assismoraes/mdeberta-domain_fold4
g-assismoraes
2024-10-28T19:09:14Z
134
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T18:44:16Z
--- library_name: transformers license: mit base_model: microsoft/mdeberta-v3-base tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: mdeberta-domain_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-domain_fold4 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3705 - Accuracy: 0.8552 - Precision: 0.8128 - Recall: 0.8276 - F1: 0.8194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 1.0349 | 1.0 | 19 | 0.9208 | 0.5931 | 0.8644 | 0.3333 | 0.2482 | | 0.88 | 2.0 | 38 | 0.7011 | 0.5931 | 0.8644 | 0.3333 | 0.2482 | | 0.68 | 3.0 | 57 | 0.6370 | 0.5931 | 0.8644 | 0.3333 | 0.2482 | | 0.6179 | 4.0 | 76 | 0.5360 | 0.8 | 0.6860 | 0.6971 | 0.6459 | | 0.4709 | 5.0 | 95 | 0.3949 | 0.8483 | 0.7967 | 0.7860 | 0.7852 | | 0.3643 | 6.0 | 114 | 0.3526 | 0.8690 | 0.8279 | 0.8209 | 0.8236 | | 0.2901 | 7.0 | 133 | 0.3713 | 0.8690 | 0.8269 | 0.8277 | 0.8242 | | 0.2414 | 8.0 | 152 | 0.3506 | 0.8759 | 0.8394 | 0.8392 | 0.8374 | | 0.1941 | 9.0 | 171 | 0.3766 | 0.8621 | 0.8206 | 0.8391 | 0.8290 | | 0.1977 | 10.0 | 190 | 0.3705 | 0.8552 | 0.8128 | 0.8276 | 0.8194 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
Tsunami-th/Tsunami-1.0-7B-Instruct
Tsunami-th
2024-10-28T19:04:11Z
40
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "th", "en", "base_model:Qwen/Qwen2.5-7B", "base_model:finetune:Qwen/Qwen2.5-7B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T18:58:45Z
--- language: - th - en license: apache-2.0 library_name: transformers base_model: - Qwen/Qwen2.5-7B-Instruct - Qwen/Qwen2.5-7B pipeline_tag: text-generation --- <img src="./Tsunami.webp" alt="Tsunami Model" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Tsunami-1.0-7B-Instruct **TSUNAMI**: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence. The **TSUNAMI** full name was created by ChatGPT. --- ### Information **Tsunami-1.0-7B-Instruct** is a Thai large language model fine-tuned from **Qwen2.5-7B** on a Thai dataset. --- ### Author - Pollakrit Lorprasertkul | [email protected] --- ### Prompt Template This model uses the `ChatML` prompt template: ``` <|im_start|>system {System}<|im_end|> <|im_start|>user {User}<|im_end|> <|im_start|>assistant {Assistant} ``` --- ### How to use ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name = "Tsunami-th/Tsunami-1.0-7B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "สวัสดีครับ"} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) inputs = tokenizer(text, return_tensors="pt") inputs = inputs.to(model.device) with torch.no_grad(): output = model.generate(**inputs, max_new_tokens=512) response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True) ``` ---
vapari/wav2vec2-base-finetuned-gtzan
vapari
2024-10-28T19:03:05Z
8
0
transformers
[ "transformers", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:facebook/wav2vec2-base", "base_model:finetune:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
audio-classification
2024-10-10T16:33:11Z
--- library_name: transformers license: apache-2.0 base_model: facebook/wav2vec2-base tags: - generated_from_trainer datasets: - marsyas/gtzan metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-gtzan results: - task: name: Audio Classification type: audio-classification dataset: name: GTZAN type: marsyas/gtzan config: all split: train args: all metrics: - name: Accuracy type: accuracy value: 0.7866666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-gtzan This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset. It achieves the following results on the evaluation set: - Loss: 1.2095 - Accuracy: 0.7867 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0746 | 1.0 | 107 | 1.9697 | 0.46 | | 1.5843 | 2.0 | 214 | 1.5908 | 0.5067 | | 1.5982 | 3.0 | 321 | 1.4385 | 0.58 | | 1.2855 | 4.0 | 428 | 1.3906 | 0.5467 | | 1.0562 | 5.0 | 535 | 1.0173 | 0.7 | | 0.8919 | 6.0 | 642 | 0.9564 | 0.6733 | | 0.7214 | 7.0 | 749 | 0.8906 | 0.7467 | | 0.7624 | 8.0 | 856 | 0.9580 | 0.7467 | | 0.3619 | 9.0 | 963 | 1.0685 | 0.7733 | | 0.3814 | 10.0 | 1070 | 1.1847 | 0.7467 | | 0.4371 | 11.0 | 1177 | 0.9630 | 0.7867 | | 0.3186 | 12.0 | 1284 | 0.9635 | 0.82 | | 0.1474 | 13.0 | 1391 | 1.0021 | 0.8333 | | 0.0918 | 14.0 | 1498 | 1.4497 | 0.7533 | | 0.0592 | 15.0 | 1605 | 1.2592 | 0.7733 | | 0.0084 | 16.0 | 1712 | 1.2656 | 0.7867 | | 0.0216 | 17.0 | 1819 | 1.2095 | 0.7867 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.1+cu121 - Datasets 3.0.1 - Tokenizers 0.20.0
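## Usage sketch

A minimal inference sketch, not part of the original card; `track.wav` is an illustrative path, and any mono 16 kHz clip should work since wav2vec2-base expects 16 kHz input.

```python
# Sketch: music-genre classification with the transformers pipeline.
# "track.wav" is illustrative; use any mono 16 kHz audio clip.
from transformers import pipeline

clf = pipeline("audio-classification", model="vapari/wav2vec2-base-finetuned-gtzan")
print(clf("track.wav"))  # GTZAN genre labels with scores
```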
mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF
mradermacher
2024-10-28T19:01:07Z
33
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:DavidAU/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct", "base_model:quantized:DavidAU/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct", "endpoints_compatible", "region:us", "imatrix" ]
null
2024-10-28T16:09:34Z
--- base_model: DavidAU/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/DavidAU/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 4.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 4.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 7.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 8.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 8.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 8.6 | | | 
[GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 9.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 10.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 13.3 | | | [GGUF](https://huggingface.co/mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF/resolve/main/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 15.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
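## Download sketch

As a hedged convenience, a single quant can be fetched with huggingface_hub; the filename below matches the i1-Q4_K_M entry in the table above.

```python
# Sketch: download the i1-Q4_K_M quant with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct-i1-GGUF",
    filename="MN-WORDSTORM-pt2-RCM-Escape-Room-18.5B-Instruct.i1-Q4_K_M.gguf",
)
print(path)  # local path to pass to a GGUF client such as llama.cpp
```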
MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF
MaziyarPanahi
2024-10-28T18:49:57Z
63
0
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored", "base_model:quantized:aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored", "region:us", "conversational" ]
text-generation
2024-10-28T18:24:01Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - text-generation model_name: DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF base_model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored inference: false model_creator: aifeifei798 pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF](https://huggingface.co/MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF) - Model creator: [aifeifei798](https://huggingface.co/aifeifei798) - Original model: [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored) ## Description [MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF](https://huggingface.co/MaziyarPanahi/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF) contains GGUF format model files for [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
karsar/paraphrase-multilingual-MiniLM-L12-hu_v1
karsar
2024-10-28T18:42:04Z
255
0
null
[ "safetensors", "bert", "mteb", "arxiv:1908.10084", "arxiv:1705.00652", "model-index", "region:us" ]
null
2024-10-23T23:27:18Z
--- model-index: - name: karsar/paraphrase-multilingual-MiniLM-L12-hu_v1 results: - dataset: config: hun_Latn-hun_Latn name: MTEB BelebeleRetrieval (hun_Latn-hun_Latn) revision: 75b399394a9803252cfec289d103de462763db7c split: test type: facebook/belebele metrics: - type: main_score value: 77.865 - type: map_at_1 value: 67.333 - type: map_at_10 value: 74.404 - type: map_at_100 value: 74.802 - type: map_at_1000 value: 74.809 - type: map_at_20 value: 74.63 - type: map_at_3 value: 72.796 - type: map_at_5 value: 73.67399999999999 - type: mrr_at_1 value: 67.33333333333333 - type: mrr_at_10 value: 74.40396825396829 - type: mrr_at_100 value: 74.80177264047548 - type: mrr_at_1000 value: 74.80937346439818 - type: mrr_at_20 value: 74.62979204843244 - type: mrr_at_3 value: 72.7962962962963 - type: mrr_at_5 value: 73.6740740740741 - type: nauc_map_at_1000_diff1 value: 76.08133094195743 - type: nauc_map_at_1000_max value: 61.727834175182736 - type: nauc_map_at_1000_std value: -2.3231732437794568 - type: nauc_map_at_100_diff1 value: 76.07916259051902 - type: nauc_map_at_100_max value: 61.72703450852774 - type: nauc_map_at_100_std value: -2.3175338063349575 - type: nauc_map_at_10_diff1 value: 75.97996147738112 - type: nauc_map_at_10_max value: 61.860784493617224 - type: nauc_map_at_10_std value: -2.4887315051072356 - type: nauc_map_at_1_diff1 value: 78.13561632940586 - type: nauc_map_at_1_max value: 59.243520843511746 - type: nauc_map_at_1_std value: -2.6689239089679515 - type: nauc_map_at_20_diff1 value: 76.06883452011327 - type: nauc_map_at_20_max value: 61.775589074510826 - type: nauc_map_at_20_std value: -2.3905575770447585 - type: nauc_map_at_3_diff1 value: 75.85937006372846 - type: nauc_map_at_3_max value: 61.819093557650895 - type: nauc_map_at_3_std value: -2.5207238945764647 - type: nauc_map_at_5_diff1 value: 76.06929563357589 - type: nauc_map_at_5_max value: 61.93563829360039 - type: nauc_map_at_5_std value: -1.9424637593671918 - type: nauc_mrr_at_1000_diff1 value: 76.08133094195743 - type: nauc_mrr_at_1000_max value: 61.727834175182736 - type: nauc_mrr_at_1000_std value: -2.3231732437794568 - type: nauc_mrr_at_100_diff1 value: 76.07916259051902 - type: nauc_mrr_at_100_max value: 61.72703450852774 - type: nauc_mrr_at_100_std value: -2.3175338063349575 - type: nauc_mrr_at_10_diff1 value: 75.97996147738112 - type: nauc_mrr_at_10_max value: 61.860784493617224 - type: nauc_mrr_at_10_std value: -2.4887315051072356 - type: nauc_mrr_at_1_diff1 value: 78.13561632940586 - type: nauc_mrr_at_1_max value: 59.243520843511746 - type: nauc_mrr_at_1_std value: -2.6689239089679515 - type: nauc_mrr_at_20_diff1 value: 76.06883452011327 - type: nauc_mrr_at_20_max value: 61.775589074510826 - type: nauc_mrr_at_20_std value: -2.3905575770447585 - type: nauc_mrr_at_3_diff1 value: 75.85937006372846 - type: nauc_mrr_at_3_max value: 61.819093557650895 - type: nauc_mrr_at_3_std value: -2.5207238945764647 - type: nauc_mrr_at_5_diff1 value: 76.06929563357589 - type: nauc_mrr_at_5_max value: 61.93563829360039 - type: nauc_mrr_at_5_std value: -1.9424637593671918 - type: nauc_ndcg_at_1000_diff1 value: 75.7057240434196 - type: nauc_ndcg_at_1000_max value: 62.021717989510385 - type: nauc_ndcg_at_1000_std value: -2.2522490330905103 - type: nauc_ndcg_at_100_diff1 value: 75.62156032414751 - type: nauc_ndcg_at_100_max value: 61.97932968109654 - type: nauc_ndcg_at_100_std value: -2.0118635701265375 - type: nauc_ndcg_at_10_diff1 value: 75.09836101324169 - type: nauc_ndcg_at_10_max value: 62.703427209156736 - type: nauc_ndcg_at_10_std 
value: -2.9287738405282395 - type: nauc_ndcg_at_1_diff1 value: 78.13561632940586 - type: nauc_ndcg_at_1_max value: 59.243520843511746 - type: nauc_ndcg_at_1_std value: -2.6689239089679515 - type: nauc_ndcg_at_20_diff1 value: 75.46348763248093 - type: nauc_ndcg_at_20_max value: 62.35498579351012 - type: nauc_ndcg_at_20_std value: -2.577338920595739 - type: nauc_ndcg_at_3_diff1 value: 74.92773626606146 - type: nauc_ndcg_at_3_max value: 62.55812080913172 - type: nauc_ndcg_at_3_std value: -2.5630879822636476 - type: nauc_ndcg_at_5_diff1 value: 75.3100398038724 - type: nauc_ndcg_at_5_max value: 62.81733471459409 - type: nauc_ndcg_at_5_std value: -1.501748019065971 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 66.63165266106552 - type: nauc_precision_at_100_max value: 57.60582010582053 - type: nauc_precision_at_100_std value: 23.844537815126937 - type: nauc_precision_at_10_diff1 value: 70.08984254109942 - type: nauc_precision_at_10_max value: 67.45880653843606 - type: nauc_precision_at_10_std value: -6.3555626412584 - type: nauc_precision_at_1_diff1 value: 78.13561632940586 - type: nauc_precision_at_1_max value: 59.243520843511746 - type: nauc_precision_at_1_std value: -2.6689239089679515 - type: nauc_precision_at_20_diff1 value: 71.63306637208878 - type: nauc_precision_at_20_max value: 65.99137307505141 - type: nauc_precision_at_20_std value: -4.675767020423249 - type: nauc_precision_at_3_diff1 value: 71.57608769475272 - type: nauc_precision_at_3_max value: 65.10683383365713 - type: nauc_precision_at_3_std value: -2.7514636167292985 - type: nauc_precision_at_5_diff1 value: 72.21412151067312 - type: nauc_precision_at_5_max value: 66.43448275862069 - type: nauc_precision_at_5_std value: 0.4555008210180189 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 66.63165266106327 - type: nauc_recall_at_100_max value: 57.60582010581922 - type: nauc_recall_at_100_std value: 23.844537815125907 - type: nauc_recall_at_10_diff1 value: 70.08984254109967 - type: nauc_recall_at_10_max value: 67.45880653843632 - type: nauc_recall_at_10_std value: -6.355562641258283 - type: nauc_recall_at_1_diff1 value: 78.13561632940586 - type: nauc_recall_at_1_max value: 59.243520843511746 - type: nauc_recall_at_1_std value: -2.6689239089679515 - type: nauc_recall_at_20_diff1 value: 71.6330663720887 - type: nauc_recall_at_20_max value: 65.9913730750516 - type: nauc_recall_at_20_std value: -4.675767020422999 - type: nauc_recall_at_3_diff1 value: 71.57608769475274 - type: nauc_recall_at_3_max value: 65.106833833657 - type: nauc_recall_at_3_std value: -2.7514636167294 - type: nauc_recall_at_5_diff1 value: 72.21412151067315 - type: nauc_recall_at_5_max value: 66.43448275862077 - type: nauc_recall_at_5_std value: 0.4555008210180812 - type: ndcg_at_1 value: 67.333 - type: ndcg_at_10 value: 77.865 - type: ndcg_at_100 value: 79.927 - type: ndcg_at_1000 value: 80.104 - type: ndcg_at_20 value: 78.701 - type: ndcg_at_3 value: 74.509 - type: ndcg_at_5 value: 76.101 - type: precision_at_1 value: 67.333 - type: precision_at_10 value: 8.878 - type: precision_at_100 value: 0.987 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.606 - type: precision_at_3 value: 26.480999999999998 - type: precision_at_5 value: 16.667 - type: recall_at_1 value: 67.333 - type: 
recall_at_10 value: 88.778 - type: recall_at_100 value: 98.667 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 92.111 - type: recall_at_3 value: 79.444 - type: recall_at_5 value: 83.333 task: type: Retrieval - dataset: config: hun_Latn-eng_Latn name: MTEB BelebeleRetrieval (hun_Latn-eng_Latn) revision: 75b399394a9803252cfec289d103de462763db7c split: test type: facebook/belebele metrics: - type: main_score value: 71.307 - type: map_at_1 value: 57.778 - type: map_at_10 value: 66.843 - type: map_at_100 value: 67.368 - type: map_at_1000 value: 67.38300000000001 - type: map_at_20 value: 67.162 - type: map_at_3 value: 64.704 - type: map_at_5 value: 65.97 - type: mrr_at_1 value: 57.77777777777777 - type: mrr_at_10 value: 66.8428130511464 - type: mrr_at_100 value: 67.36803803097415 - type: mrr_at_1000 value: 67.38317813286176 - type: mrr_at_20 value: 67.16164827986293 - type: mrr_at_3 value: 64.7037037037037 - type: mrr_at_5 value: 65.97037037037038 - type: nauc_map_at_1000_diff1 value: 69.02219987684592 - type: nauc_map_at_1000_max value: 60.114123597785785 - type: nauc_map_at_1000_std value: 4.880216382742553 - type: nauc_map_at_100_diff1 value: 69.01116363727591 - type: nauc_map_at_100_max value: 60.11716622079215 - type: nauc_map_at_100_std value: 4.890393343425179 - type: nauc_map_at_10_diff1 value: 68.95240309900163 - type: nauc_map_at_10_max value: 60.124170478386105 - type: nauc_map_at_10_std value: 4.819161459028938 - type: nauc_map_at_1_diff1 value: 72.45335820895522 - type: nauc_map_at_1_max value: 59.127316006176 - type: nauc_map_at_1_std value: 6.580191713844538 - type: nauc_map_at_20_diff1 value: 68.87249492072671 - type: nauc_map_at_20_max value: 60.04834608184139 - type: nauc_map_at_20_std value: 4.807958211395879 - type: nauc_map_at_3_diff1 value: 69.38092756897547 - type: nauc_map_at_3_max value: 60.30271451423346 - type: nauc_map_at_3_std value: 3.9374045068220322 - type: nauc_map_at_5_diff1 value: 69.10875854889262 - type: nauc_map_at_5_max value: 60.24557626138646 - type: nauc_map_at_5_std value: 4.271289591515184 - type: nauc_mrr_at_1000_diff1 value: 69.02219987684592 - type: nauc_mrr_at_1000_max value: 60.114123597785785 - type: nauc_mrr_at_1000_std value: 4.880216382742553 - type: nauc_mrr_at_100_diff1 value: 69.01116363727591 - type: nauc_mrr_at_100_max value: 60.11716622079215 - type: nauc_mrr_at_100_std value: 4.890393343425179 - type: nauc_mrr_at_10_diff1 value: 68.95240309900163 - type: nauc_mrr_at_10_max value: 60.124170478386105 - type: nauc_mrr_at_10_std value: 4.819161459028938 - type: nauc_mrr_at_1_diff1 value: 72.45335820895522 - type: nauc_mrr_at_1_max value: 59.127316006176 - type: nauc_mrr_at_1_std value: 6.580191713844538 - type: nauc_mrr_at_20_diff1 value: 68.87249492072671 - type: nauc_mrr_at_20_max value: 60.04834608184139 - type: nauc_mrr_at_20_std value: 4.807958211395879 - type: nauc_mrr_at_3_diff1 value: 69.38092756897547 - type: nauc_mrr_at_3_max value: 60.30271451423346 - type: nauc_mrr_at_3_std value: 3.9374045068220322 - type: nauc_mrr_at_5_diff1 value: 69.10875854889262 - type: nauc_mrr_at_5_max value: 60.24557626138646 - type: nauc_mrr_at_5_std value: 4.271289591515184 - type: nauc_ndcg_at_1000_diff1 value: 68.36151731152576 - type: nauc_ndcg_at_1000_max value: 60.21499073164881 - type: nauc_ndcg_at_1000_std value: 5.019374170320369 - type: nauc_ndcg_at_100_diff1 value: 68.12777182930174 - type: nauc_ndcg_at_100_max value: 60.293069076013296 - type: nauc_ndcg_at_100_std value: 5.375522795479381 - type: nauc_ndcg_at_10_diff1 value: 
67.46914440211127 - type: nauc_ndcg_at_10_max value: 60.210209508170976 - type: nauc_ndcg_at_10_std value: 4.921793458534013 - type: nauc_ndcg_at_1_diff1 value: 72.45335820895522 - type: nauc_ndcg_at_1_max value: 59.127316006176 - type: nauc_ndcg_at_1_std value: 6.580191713844538 - type: nauc_ndcg_at_20_diff1 value: 67.09692054164125 - type: nauc_ndcg_at_20_max value: 59.89689460185056 - type: nauc_ndcg_at_20_std value: 4.977631579372532 - type: nauc_ndcg_at_3_diff1 value: 68.54468748113734 - type: nauc_ndcg_at_3_max value: 60.66886257099051 - type: nauc_ndcg_at_3_std value: 3.073807310026356 - type: nauc_ndcg_at_5_diff1 value: 67.94441056262235 - type: nauc_ndcg_at_5_max value: 60.47774252804478 - type: nauc_ndcg_at_5_std value: 3.572034464519458 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 52.808123249299676 - type: nauc_precision_at_100_max value: 65.81699346405254 - type: nauc_precision_at_100_std value: 31.809056956116383 - type: nauc_precision_at_10_diff1 value: 59.02820830750145 - type: nauc_precision_at_10_max value: 60.33787972721626 - type: nauc_precision_at_10_std value: 6.405175213296739 - type: nauc_precision_at_1_diff1 value: 72.45335820895522 - type: nauc_precision_at_1_max value: 59.127316006176 - type: nauc_precision_at_1_std value: 6.580191713844538 - type: nauc_precision_at_20_diff1 value: 52.242994576107485 - type: nauc_precision_at_20_max value: 57.56617253643015 - type: nauc_precision_at_20_std value: 7.9884388212213455 - type: nauc_precision_at_3_diff1 value: 65.73191064426206 - type: nauc_precision_at_3_max value: 61.92373010829596 - type: nauc_precision_at_3_std value: 0.096317142458587 - type: nauc_precision_at_5_diff1 value: 63.20464039592358 - type: nauc_precision_at_5_max value: 61.25721735891223 - type: nauc_precision_at_5_std value: 0.7937099220392029 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 52.80812324929921 - type: nauc_recall_at_100_max value: 65.81699346405242 - type: nauc_recall_at_100_std value: 31.809056956115235 - type: nauc_recall_at_10_diff1 value: 59.02820830750159 - type: nauc_recall_at_10_max value: 60.337879727216446 - type: nauc_recall_at_10_std value: 6.405175213296646 - type: nauc_recall_at_1_diff1 value: 72.45335820895522 - type: nauc_recall_at_1_max value: 59.127316006176 - type: nauc_recall_at_1_std value: 6.580191713844538 - type: nauc_recall_at_20_diff1 value: 52.242994576107534 - type: nauc_recall_at_20_max value: 57.56617253643034 - type: nauc_recall_at_20_std value: 7.988438821221468 - type: nauc_recall_at_3_diff1 value: 65.73191064426209 - type: nauc_recall_at_3_max value: 61.923730108295906 - type: nauc_recall_at_3_std value: 0.09631714245861488 - type: nauc_recall_at_5_diff1 value: 63.204640395923626 - type: nauc_recall_at_5_max value: 61.25721735891235 - type: nauc_recall_at_5_std value: 0.7937099220392697 - type: ndcg_at_1 value: 57.778 - type: ndcg_at_10 value: 71.307 - type: ndcg_at_100 value: 73.942 - type: ndcg_at_1000 value: 74.248 - type: ndcg_at_20 value: 72.499 - type: ndcg_at_3 value: 66.95 - type: ndcg_at_5 value: 69.21199999999999 - type: precision_at_1 value: 57.778 - type: precision_at_10 value: 8.533 - type: precision_at_100 value: 0.9780000000000001 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.506 - type: precision_at_3 
value: 24.481 - type: precision_at_5 value: 15.778 - type: recall_at_1 value: 57.778 - type: recall_at_10 value: 85.333 - type: recall_at_100 value: 97.77799999999999 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 90.11099999999999 - type: recall_at_3 value: 73.444 - type: recall_at_5 value: 78.889 task: type: Retrieval - dataset: config: eng_Latn-hun_Latn name: MTEB BelebeleRetrieval (eng_Latn-hun_Latn) revision: 75b399394a9803252cfec289d103de462763db7c split: test type: facebook/belebele metrics: - type: main_score value: 73.668 - type: map_at_1 value: 60.778 - type: map_at_10 value: 69.571 - type: map_at_100 value: 70.114 - type: map_at_1000 value: 70.124 - type: map_at_20 value: 69.93700000000001 - type: map_at_3 value: 67.778 - type: map_at_5 value: 68.872 - type: mrr_at_1 value: 60.77777777777777 - type: mrr_at_10 value: 69.57142857142857 - type: mrr_at_100 value: 70.1136336675579 - type: mrr_at_1000 value: 70.12432347462514 - type: mrr_at_20 value: 69.93690215204663 - type: mrr_at_3 value: 67.77777777777779 - type: mrr_at_5 value: 68.87222222222223 - type: nauc_map_at_1000_diff1 value: 70.84789011327231 - type: nauc_map_at_1000_max value: 60.852088181225824 - type: nauc_map_at_1000_std value: 6.549993568212846 - type: nauc_map_at_100_diff1 value: 70.84603146007751 - type: nauc_map_at_100_max value: 60.859417397516125 - type: nauc_map_at_100_std value: 6.577244018939677 - type: nauc_map_at_10_diff1 value: 70.71490936568583 - type: nauc_map_at_10_max value: 60.94472236517367 - type: nauc_map_at_10_std value: 6.53657697773106 - type: nauc_map_at_1_diff1 value: 74.59301032751448 - type: nauc_map_at_1_max value: 59.251209223705935 - type: nauc_map_at_1_std value: 6.536579330592454 - type: nauc_map_at_20_diff1 value: 70.69902333418673 - type: nauc_map_at_20_max value: 60.84819592450007 - type: nauc_map_at_20_std value: 6.487171209675751 - type: nauc_map_at_3_diff1 value: 70.94073456299253 - type: nauc_map_at_3_max value: 61.117845574972286 - type: nauc_map_at_3_std value: 5.824524654602759 - type: nauc_map_at_5_diff1 value: 70.64337838638826 - type: nauc_map_at_5_max value: 60.69375707294804 - type: nauc_map_at_5_std value: 6.1403804587682025 - type: nauc_mrr_at_1000_diff1 value: 70.84789011327231 - type: nauc_mrr_at_1000_max value: 60.852088181225824 - type: nauc_mrr_at_1000_std value: 6.549993568212846 - type: nauc_mrr_at_100_diff1 value: 70.84603146007751 - type: nauc_mrr_at_100_max value: 60.859417397516125 - type: nauc_mrr_at_100_std value: 6.577244018939677 - type: nauc_mrr_at_10_diff1 value: 70.71490936568583 - type: nauc_mrr_at_10_max value: 60.94472236517367 - type: nauc_mrr_at_10_std value: 6.53657697773106 - type: nauc_mrr_at_1_diff1 value: 74.59301032751448 - type: nauc_mrr_at_1_max value: 59.251209223705935 - type: nauc_mrr_at_1_std value: 6.536579330592454 - type: nauc_mrr_at_20_diff1 value: 70.69902333418673 - type: nauc_mrr_at_20_max value: 60.84819592450007 - type: nauc_mrr_at_20_std value: 6.487171209675751 - type: nauc_mrr_at_3_diff1 value: 70.94073456299253 - type: nauc_mrr_at_3_max value: 61.117845574972286 - type: nauc_mrr_at_3_std value: 5.824524654602759 - type: nauc_mrr_at_5_diff1 value: 70.64337838638826 - type: nauc_mrr_at_5_max value: 60.69375707294804 - type: nauc_mrr_at_5_std value: 6.1403804587682025 - type: nauc_ndcg_at_1000_diff1 value: 70.2568421673153 - type: nauc_ndcg_at_1000_max value: 61.154155762479746 - type: nauc_ndcg_at_1000_std value: 6.987492117976732 - type: nauc_ndcg_at_100_diff1 value: 70.23106290886678 - type: 
nauc_ndcg_at_100_max value: 61.387176821366296 - type: nauc_ndcg_at_100_std value: 7.782749694416603 - type: nauc_ndcg_at_10_diff1 value: 69.26227190907855 - type: nauc_ndcg_at_10_max value: 61.634434826859874 - type: nauc_ndcg_at_10_std value: 7.185316156791736 - type: nauc_ndcg_at_1_diff1 value: 74.59301032751448 - type: nauc_ndcg_at_1_max value: 59.251209223705935 - type: nauc_ndcg_at_1_std value: 6.536579330592454 - type: nauc_ndcg_at_20_diff1 value: 69.1954116973286 - type: nauc_ndcg_at_20_max value: 61.38887961478062 - type: nauc_ndcg_at_20_std value: 7.1318041010309585 - type: nauc_ndcg_at_3_diff1 value: 69.75775816678905 - type: nauc_ndcg_at_3_max value: 61.67436817540673 - type: nauc_ndcg_at_3_std value: 5.650531149732009 - type: nauc_ndcg_at_5_diff1 value: 69.1651947412561 - type: nauc_ndcg_at_5_max value: 60.97882565960433 - type: nauc_ndcg_at_5_std value: 6.203128058155249 - type: nauc_precision_at_1000_diff1 value: .nan - type: nauc_precision_at_1000_max value: .nan - type: nauc_precision_at_1000_std value: .nan - type: nauc_precision_at_100_diff1 value: 68.65491294557121 - type: nauc_precision_at_100_max value: 80.36744109408565 - type: nauc_precision_at_100_std value: 70.92327126929257 - type: nauc_precision_at_10_diff1 value: 61.29162638094176 - type: nauc_precision_at_10_max value: 65.7264903076506 - type: nauc_precision_at_10_std value: 11.47548778748128 - type: nauc_precision_at_1_diff1 value: 74.59301032751448 - type: nauc_precision_at_1_max value: 59.251209223705935 - type: nauc_precision_at_1_std value: 6.536579330592454 - type: nauc_precision_at_20_diff1 value: 56.51478369125409 - type: nauc_precision_at_20_max value: 66.28882664176771 - type: nauc_precision_at_20_std value: 14.05415499533146 - type: nauc_precision_at_3_diff1 value: 65.55150000975934 - type: nauc_precision_at_3_max value: 63.631594870493636 - type: nauc_precision_at_3_std value: 5.057287295297996 - type: nauc_precision_at_5_diff1 value: 62.93787770906014 - type: nauc_precision_at_5_max value: 62.06285784899278 - type: nauc_precision_at_5_std value: 6.577948558011871 - type: nauc_recall_at_1000_diff1 value: .nan - type: nauc_recall_at_1000_max value: .nan - type: nauc_recall_at_1000_std value: .nan - type: nauc_recall_at_100_diff1 value: 68.6549129455701 - type: nauc_recall_at_100_max value: 80.36744109408454 - type: nauc_recall_at_100_std value: 70.92327126929207 - type: nauc_recall_at_10_diff1 value: 61.29162638094184 - type: nauc_recall_at_10_max value: 65.72649030765079 - type: nauc_recall_at_10_std value: 11.475487787481537 - type: nauc_recall_at_1_diff1 value: 74.59301032751448 - type: nauc_recall_at_1_max value: 59.251209223705935 - type: nauc_recall_at_1_std value: 6.536579330592454 - type: nauc_recall_at_20_diff1 value: 56.514783691254266 - type: nauc_recall_at_20_max value: 66.28882664176774 - type: nauc_recall_at_20_std value: 14.054154995331741 - type: nauc_recall_at_3_diff1 value: 65.55150000975928 - type: nauc_recall_at_3_max value: 63.63159487049364 - type: nauc_recall_at_3_std value: 5.05728729529798 - type: nauc_recall_at_5_diff1 value: 62.937877709060295 - type: nauc_recall_at_5_max value: 62.06285784899285 - type: nauc_recall_at_5_std value: 6.577948558011953 - type: ndcg_at_1 value: 60.778 - type: ndcg_at_10 value: 73.668 - type: ndcg_at_100 value: 76.21 - type: ndcg_at_1000 value: 76.459 - type: ndcg_at_20 value: 74.993 - type: ndcg_at_3 value: 70.00800000000001 - type: ndcg_at_5 value: 71.978 - type: precision_at_1 value: 60.778 - type: precision_at_10 value: 8.644 - type: 
precision_at_100 value: 0.9809999999999999 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.583 - type: precision_at_3 value: 25.480999999999998 - type: precision_at_5 value: 16.244 - type: recall_at_1 value: 60.778 - type: recall_at_10 value: 86.444 - type: recall_at_100 value: 98.111 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 91.667 - type: recall_at_3 value: 76.444 - type: recall_at_5 value: 81.22200000000001 task: type: Retrieval - dataset: config: eng_Latn-hun_Latn name: MTEB BibleNLPBitextMining (eng_Latn-hun_Latn) revision: 264a18480c529d9e922483839b4b9758e690b762 split: train type: davidstap/biblenlp-corpus-mmteb metrics: - type: accuracy value: 88.671875 - type: f1 value: 85.859375 - type: main_score value: 85.859375 - type: precision value: 84.71354166666667 - type: recall value: 88.671875 task: type: BitextMining - dataset: config: hun_Latn-eng_Latn name: MTEB BibleNLPBitextMining (hun_Latn-eng_Latn) revision: 264a18480c529d9e922483839b4b9758e690b762 split: train type: davidstap/biblenlp-corpus-mmteb metrics: - type: accuracy value: 91.796875 - type: f1 value: 89.41406249999999 - type: main_score value: 89.41406249999999 - type: precision value: 88.31380208333334 - type: recall value: 91.796875 task: type: BitextMining - dataset: config: default name: MTEB HunSum2AbstractiveRetrieval (default) revision: 24e1445c8180d937f0a16f8ae8a62e77cc952e56 split: test type: SZTAKI-HLT/HunSum-2-abstractive metrics: - type: main_score value: 63.263000000000005 - type: map_at_1 value: 63.263000000000005 - type: map_at_10 value: 69.717 - type: map_at_100 value: 70.19999999999999 - type: map_at_1000 value: 70.223 - type: map_at_20 value: 69.987 - type: map_at_3 value: 68.126 - type: map_at_5 value: 69.11500000000001 - type: mrr_at_1 value: 63.263263263263255 - type: mrr_at_10 value: 69.71656179989505 - type: mrr_at_100 value: 70.20005091433352 - type: mrr_at_1000 value: 70.22300238535382 - type: mrr_at_20 value: 69.98650484718584 - type: mrr_at_3 value: 68.12645979312641 - type: mrr_at_5 value: 69.11494828161491 - type: nauc_map_at_1000_diff1 value: 78.57062147162597 - type: nauc_map_at_1000_max value: 67.50701502337495 - type: nauc_map_at_1000_std value: -0.5617129044803558 - type: nauc_map_at_100_diff1 value: 78.55994402867587 - type: nauc_map_at_100_max value: 67.50751346612932 - type: nauc_map_at_100_std value: -0.5527533150571393 - type: nauc_map_at_10_diff1 value: 78.40366721771652 - type: nauc_map_at_10_max value: 67.49241622659412 - type: nauc_map_at_10_std value: -0.48552097268197614 - type: nauc_map_at_1_diff1 value: 82.01486923813978 - type: nauc_map_at_1_max value: 65.96265600324601 - type: nauc_map_at_1_std value: -3.3920974069100702 - type: nauc_map_at_20_diff1 value: 78.47160921094391 - type: nauc_map_at_20_max value: 67.53010937556571 - type: nauc_map_at_20_std value: -0.5304810036230149 - type: nauc_map_at_3_diff1 value: 78.82728109994231 - type: nauc_map_at_3_max value: 67.67886259360823 - type: nauc_map_at_3_std value: -0.8390404611287001 - type: nauc_map_at_5_diff1 value: 78.64851152021848 - type: nauc_map_at_5_max value: 67.56443643847581 - type: nauc_map_at_5_std value: -0.5438994708241538 - type: nauc_mrr_at_1000_diff1 value: 78.57062147162597 - type: nauc_mrr_at_1000_max value: 67.50701502337495 - type: nauc_mrr_at_1000_std value: -0.5617129044803558 - type: nauc_mrr_at_100_diff1 value: 78.55994402867587 - type: nauc_mrr_at_100_max value: 67.50751346612932 - type: nauc_mrr_at_100_std value: -0.5527533150571393 - type: 
nauc_mrr_at_10_diff1 value: 78.40366721771652 - type: nauc_mrr_at_10_max value: 67.49241622659412 - type: nauc_mrr_at_10_std value: -0.48552097268197614 - type: nauc_mrr_at_1_diff1 value: 82.01486923813978 - type: nauc_mrr_at_1_max value: 65.96265600324601 - type: nauc_mrr_at_1_std value: -3.3920974069100702 - type: nauc_mrr_at_20_diff1 value: 78.47160921094391 - type: nauc_mrr_at_20_max value: 67.53010937556571 - type: nauc_mrr_at_20_std value: -0.5304810036230149 - type: nauc_mrr_at_3_diff1 value: 78.82728109994231 - type: nauc_mrr_at_3_max value: 67.67886259360823 - type: nauc_mrr_at_3_std value: -0.8390404611287001 - type: nauc_mrr_at_5_diff1 value: 78.64851152021848 - type: nauc_mrr_at_5_max value: 67.56443643847581 - type: nauc_mrr_at_5_std value: -0.5438994708241538 - type: nauc_ndcg_at_1000_diff1 value: 77.85313935589254 - type: nauc_ndcg_at_1000_max value: 67.79745016701565 - type: nauc_ndcg_at_1000_std value: 0.3743893992928968 - type: nauc_ndcg_at_100_diff1 value: 77.54895730138853 - type: nauc_ndcg_at_100_max value: 67.90017248869928 - type: nauc_ndcg_at_100_std value: 0.859162358234398 - type: nauc_ndcg_at_10_diff1 value: 76.71113405671676 - type: nauc_ndcg_at_10_max value: 67.96034182778398 - type: nauc_ndcg_at_10_std value: 1.1822837192182254 - type: nauc_ndcg_at_1_diff1 value: 82.01486923813978 - type: nauc_ndcg_at_1_max value: 65.96265600324601 - type: nauc_ndcg_at_1_std value: -3.3920974069100702 - type: nauc_ndcg_at_20_diff1 value: 76.93959621702203 - type: nauc_ndcg_at_20_max value: 68.11195662698223 - type: nauc_ndcg_at_20_std value: 1.04309687394849 - type: nauc_ndcg_at_3_diff1 value: 77.79565059957739 - type: nauc_ndcg_at_3_max value: 68.28729385816999 - type: nauc_ndcg_at_3_std value: 0.2325515867720005 - type: nauc_ndcg_at_5_diff1 value: 77.37740780039985 - type: nauc_ndcg_at_5_max value: 68.0591693716456 - type: nauc_ndcg_at_5_std value: 0.8419316054801026 - type: nauc_precision_at_1000_diff1 value: 70.06119288295852 - type: nauc_precision_at_1000_max value: 56.300969751588504 - type: nauc_precision_at_1000_std value: 42.8131104675957 - type: nauc_precision_at_100_diff1 value: 67.53252742986358 - type: nauc_precision_at_100_max value: 71.63984328411749 - type: nauc_precision_at_100_std value: 20.467710864542678 - type: nauc_precision_at_10_diff1 value: 68.62375685620702 - type: nauc_precision_at_10_max value: 70.02532507228068 - type: nauc_precision_at_10_std value: 9.35439782317633 - type: nauc_precision_at_1_diff1 value: 82.01486923813978 - type: nauc_precision_at_1_max value: 65.96265600324601 - type: nauc_precision_at_1_std value: -3.3920974069100702 - type: nauc_precision_at_20_diff1 value: 67.96187481073133 - type: nauc_precision_at_20_max value: 71.59854027319963 - type: nauc_precision_at_20_std value: 10.641909874113086 - type: nauc_precision_at_3_diff1 value: 74.38802810704372 - type: nauc_precision_at_3_max value: 70.31804260818862 - type: nauc_precision_at_3_std value: 3.8694413447531946 - type: nauc_precision_at_5_diff1 value: 72.53680275396366 - type: nauc_precision_at_5_max value: 69.84127154759457 - type: nauc_precision_at_5_std value: 6.232801743816592 - type: nauc_recall_at_1000_diff1 value: 70.06119288296337 - type: nauc_recall_at_1000_max value: 56.30096975158339 - type: nauc_recall_at_1000_std value: 42.81311046760523 - type: nauc_recall_at_100_diff1 value: 67.53252742986345 - type: nauc_recall_at_100_max value: 71.63984328411706 - type: nauc_recall_at_100_std value: 20.46771086454334 - type: nauc_recall_at_10_diff1 value: 68.62375685620707 - 
type: nauc_recall_at_10_max value: 70.02532507228068 - type: nauc_recall_at_10_std value: 9.354397823176459 - type: nauc_recall_at_1_diff1 value: 82.01486923813978 - type: nauc_recall_at_1_max value: 65.96265600324601 - type: nauc_recall_at_1_std value: -3.3920974069100702 - type: nauc_recall_at_20_diff1 value: 67.96187481073152 - type: nauc_recall_at_20_max value: 71.59854027319979 - type: nauc_recall_at_20_std value: 10.641909874113258 - type: nauc_recall_at_3_diff1 value: 74.3880281070437 - type: nauc_recall_at_3_max value: 70.31804260818865 - type: nauc_recall_at_3_std value: 3.8694413447530995 - type: nauc_recall_at_5_diff1 value: 72.53680275396374 - type: nauc_recall_at_5_max value: 69.84127154759464 - type: nauc_recall_at_5_std value: 6.232801743816686 - type: ndcg_at_1 value: 63.263000000000005 - type: ndcg_at_10 value: 72.89099999999999 - type: ndcg_at_100 value: 75.421 - type: ndcg_at_1000 value: 76.027 - type: ndcg_at_20 value: 73.919 - type: ndcg_at_3 value: 69.646 - type: ndcg_at_5 value: 71.434 - type: precision_at_1 value: 63.263000000000005 - type: precision_at_10 value: 8.288 - type: precision_at_100 value: 0.95 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.352 - type: precision_at_3 value: 24.675 - type: precision_at_5 value: 15.676000000000002 - type: recall_at_1 value: 63.263000000000005 - type: recall_at_10 value: 82.883 - type: recall_at_100 value: 95.045 - type: recall_at_1000 value: 99.8 - type: recall_at_20 value: 87.03699999999999 - type: recall_at_3 value: 74.024 - type: recall_at_5 value: 78.378 task: type: Retrieval - dataset: config: hu name: MTEB MassiveIntentClassification (hu) revision: 4672e20407010da34463acc759c162ca9734bca6 split: test type: mteb/amazon_massive_intent metrics: - type: accuracy value: 60.08406186953599 - type: f1 value: 56.958742875652455 - type: f1_weighted value: 60.57068245324919 - type: main_score value: 60.08406186953599 task: type: Classification - dataset: config: hu name: MTEB MassiveIntentClassification (hu) revision: 4672e20407010da34463acc759c162ca9734bca6 split: validation type: mteb/amazon_massive_intent metrics: - type: accuracy value: 60.201672405312344 - type: f1 value: 57.03816512332761 - type: f1_weighted value: 60.53109947438201 - type: main_score value: 60.201672405312344 task: type: Classification - dataset: config: hu name: MTEB MassiveScenarioClassification (hu) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: test type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 66.61398789509079 - type: f1 value: 65.88647044935249 - type: f1_weighted value: 66.80145146976484 - type: main_score value: 66.61398789509079 task: type: Classification - dataset: config: hu name: MTEB MassiveScenarioClassification (hu) revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 split: validation type: mteb/amazon_massive_scenario metrics: - type: accuracy value: 66.11411706837187 - type: f1 value: 65.76717397996951 - type: f1_weighted value: 66.29902597756885 - type: main_score value: 66.11411706837187 task: type: Classification - dataset: config: hu name: MTEB MultiEURLEXMultilabelClassification (hu) revision: 2aea5a6dc8fdcfeca41d0fb963c0a338930bde5c split: test type: mteb/eurlex-multilingual metrics: - type: accuracy value: 3.0839999999999996 - type: f1 value: 27.860225486785566 - type: lrap value: 43.02579150793552 - type: main_score value: 3.0839999999999996 task: type: MultilabelClassification - dataset: config: arb_Arab-hun_Latn name: MTEB NTREXBitextMining (arb_Arab-hun_Latn) revision: 
ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 85.678517776665 - type: f1 value: 81.92049979731502 - type: main_score value: 81.92049979731502 - type: precision value: 80.21115005842097 - type: recall value: 85.678517776665 task: type: BitextMining - dataset: config: ben_Beng-hun_Latn name: MTEB NTREXBitextMining (ben_Beng-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 44.566850275413124 - type: f1 value: 39.07033025889276 - type: main_score value: 39.07033025889276 - type: precision value: 37.07348327291399 - type: recall value: 44.566850275413124 task: type: BitextMining - dataset: config: deu_Latn-hun_Latn name: MTEB NTREXBitextMining (deu_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.44016024036054 - type: f1 value: 91.61909530963112 - type: main_score value: 91.61909530963112 - type: precision value: 90.75279586045735 - type: recall value: 93.44016024036054 task: type: BitextMining - dataset: config: ell_Grek-hun_Latn name: MTEB NTREXBitextMining (ell_Grek-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.4371557336004 - type: f1 value: 89.0261582850466 - type: main_score value: 89.0261582850466 - type: precision value: 87.9043565348022 - type: recall value: 91.4371557336004 task: type: BitextMining - dataset: config: eng_Latn-hun_Latn name: MTEB NTREXBitextMining (eng_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 94.44166249374061 - type: f1 value: 92.8092138207311 - type: main_score value: 92.8092138207311 - type: precision value: 92.0422300116842 - type: recall value: 94.44166249374061 task: type: BitextMining - dataset: config: fas_Arab-hun_Latn name: MTEB NTREXBitextMining (fas_Arab-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 89.53430145217827 - type: f1 value: 86.72270310227245 - type: main_score value: 86.72270310227245 - type: precision value: 85.42814221331997 - type: recall value: 89.53430145217827 task: type: BitextMining - dataset: config: fin_Latn-hun_Latn name: MTEB NTREXBitextMining (fin_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.98647971957938 - type: f1 value: 88.44600233683859 - type: main_score value: 88.44600233683859 - type: precision value: 87.2575529961609 - type: recall value: 90.98647971957938 task: type: BitextMining - dataset: config: fra_Latn-hun_Latn name: MTEB NTREXBitextMining (fra_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 92.28843264897347 - type: f1 value: 90.12518778167251 - type: main_score value: 90.12518778167251 - type: precision value: 89.12535469871473 - type: recall value: 92.28843264897347 task: type: BitextMining - dataset: config: heb_Hebr-hun_Latn name: MTEB NTREXBitextMining (heb_Hebr-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 87.33099649474211 - type: f1 value: 83.88582874311467 - type: main_score value: 83.88582874311467 - type: precision value: 82.31263562009681 - type: recall value: 87.33099649474211 task: type: BitextMining - dataset: 
config: hin_Deva-hun_Latn name: MTEB NTREXBitextMining (hin_Deva-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 86.52979469203805 - type: f1 value: 83.08240137984755 - type: main_score value: 83.08240137984755 - type: precision value: 81.51352028042064 - type: recall value: 86.52979469203805 task: type: BitextMining - dataset: config: hun_Latn-arb_Arab name: MTEB NTREXBitextMining (hun_Latn-arb_Arab) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 86.73009514271406 - type: f1 value: 83.12397167179341 - type: main_score value: 83.12397167179341 - type: precision value: 81.47805040894676 - type: recall value: 86.73009514271406 task: type: BitextMining - dataset: config: hun_Latn-ben_Beng name: MTEB NTREXBitextMining (hun_Latn-ben_Beng) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 41.16174261392088 - type: f1 value: 32.73025519520262 - type: main_score value: 32.73025519520262 - type: precision value: 29.859172986363774 - type: recall value: 41.16174261392088 task: type: BitextMining - dataset: config: hun_Latn-deu_Latn name: MTEB NTREXBitextMining (hun_Latn-deu_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.39008512769153 - type: f1 value: 91.5456518110499 - type: main_score value: 91.5456518110499 - type: precision value: 90.66099148723085 - type: recall value: 93.39008512769153 task: type: BitextMining - dataset: config: hun_Latn-ell_Grek name: MTEB NTREXBitextMining (hun_Latn-ell_Grek) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 92.03805708562844 - type: f1 value: 89.81305291270239 - type: main_score value: 89.81305291270239 - type: precision value: 88.78317476214322 - type: recall value: 92.03805708562844 task: type: BitextMining - dataset: config: hun_Latn-eng_Latn name: MTEB NTREXBitextMining (hun_Latn-eng_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 94.74211316975463 - type: f1 value: 93.23985978968453 - type: main_score value: 93.23985978968453 - type: precision value: 92.51377065598398 - type: recall value: 94.74211316975463 task: type: BitextMining - dataset: config: hun_Latn-fas_Arab name: MTEB NTREXBitextMining (hun_Latn-fas_Arab) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 88.5327991987982 - type: f1 value: 85.49240527457853 - type: main_score value: 85.49240527457853 - type: precision value: 84.10413238905979 - type: recall value: 88.5327991987982 task: type: BitextMining - dataset: config: hun_Latn-fin_Latn name: MTEB NTREXBitextMining (hun_Latn-fin_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.23535302954431 - type: f1 value: 87.53296611584042 - type: main_score value: 87.53296611584042 - type: precision value: 86.26690035052579 - type: recall value: 90.23535302954431 task: type: BitextMining - dataset: config: hun_Latn-fra_Latn name: MTEB NTREXBitextMining (hun_Latn-fra_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 92.63895843765648 - type: f1 value: 90.47070605908863 - type: main_score value: 90.47070605908863 - type: precision value: 
89.42163244867301 - type: recall value: 92.63895843765648 task: type: BitextMining - dataset: config: hun_Latn-heb_Hebr name: MTEB NTREXBitextMining (hun_Latn-heb_Hebr) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 86.62994491737606 - type: f1 value: 83.19388173168845 - type: main_score value: 83.19388173168845 - type: precision value: 81.65832081455517 - type: recall value: 86.62994491737606 task: type: BitextMining - dataset: config: hun_Latn-hin_Deva name: MTEB NTREXBitextMining (hun_Latn-hin_Deva) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 83.97596394591888 - type: f1 value: 79.85502062617736 - type: main_score value: 79.85502062617736 - type: precision value: 78.01758192844824 - type: recall value: 83.97596394591888 task: type: BitextMining - dataset: config: hun_Latn-ind_Latn name: MTEB NTREXBitextMining (hun_Latn-ind_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 92.68903355032549 - type: f1 value: 90.64596895343014 - type: main_score value: 90.64596895343014 - type: precision value: 89.68869971624103 - type: recall value: 92.68903355032549 task: type: BitextMining - dataset: config: hun_Latn-jpn_Jpan name: MTEB NTREXBitextMining (hun_Latn-jpn_Jpan) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 85.778668002003 - type: f1 value: 82.19829744616925 - type: main_score value: 82.19829744616925 - type: precision value: 80.62426973794025 - type: recall value: 85.778668002003 task: type: BitextMining - dataset: config: hun_Latn-kor_Hang name: MTEB NTREXBitextMining (hun_Latn-kor_Hang) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 84.17626439659489 - type: f1 value: 80.26746468909714 - type: main_score value: 80.26746468909714 - type: precision value: 78.5646097351155 - type: recall value: 84.17626439659489 task: type: BitextMining - dataset: config: hun_Latn-lav_Latn name: MTEB NTREXBitextMining (hun_Latn-lav_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.1352028042063 - type: f1 value: 87.30262059756302 - type: main_score value: 87.30262059756302 - type: precision value: 85.98731430479052 - type: recall value: 90.1352028042063 task: type: BitextMining - dataset: config: hun_Latn-lit_Latn name: MTEB NTREXBitextMining (hun_Latn-lit_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 89.58437656484726 - type: f1 value: 86.8252378567852 - type: main_score value: 86.8252378567852 - type: precision value: 85.54581872809214 - type: recall value: 89.58437656484726 task: type: BitextMining - dataset: config: hun_Latn-nld_Latn name: MTEB NTREXBitextMining (hun_Latn-nld_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.03955933900852 - type: f1 value: 91.03989317309296 - type: main_score value: 91.03989317309296 - type: precision value: 90.08930061759305 - type: recall value: 93.03955933900852 task: type: BitextMining - dataset: config: hun_Latn-pol_Latn name: MTEB NTREXBitextMining (hun_Latn-pol_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.58738107160741 - type: f1 value: 
89.28225671841095 - type: main_score value: 89.28225671841095 - type: precision value: 88.18227341011517 - type: recall value: 91.58738107160741 task: type: BitextMining - dataset: config: hun_Latn-por_Latn name: MTEB NTREXBitextMining (hun_Latn-por_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.59038557836755 - type: f1 value: 91.71256885327992 - type: main_score value: 91.71256885327992 - type: precision value: 90.80287097312635 - type: recall value: 93.59038557836755 task: type: BitextMining - dataset: config: hun_Latn-rus_Cyrl name: MTEB NTREXBitextMining (hun_Latn-rus_Cyrl) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.3370055082624 - type: f1 value: 88.88916708395926 - type: main_score value: 88.88916708395926 - type: precision value: 87.75961561389704 - type: recall value: 91.3370055082624 task: type: BitextMining - dataset: config: hun_Latn-spa_Latn name: MTEB NTREXBitextMining (hun_Latn-spa_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.69053580370556 - type: f1 value: 91.94959105324652 - type: main_score value: 91.94959105324652 - type: precision value: 91.12418627941913 - type: recall value: 93.69053580370556 task: type: BitextMining - dataset: config: hun_Latn-swa_Latn name: MTEB NTREXBitextMining (hun_Latn-swa_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 35.803705558337505 - type: f1 value: 27.79832969518814 - type: main_score value: 27.79832969518814 - type: precision value: 25.370895920971037 - type: recall value: 35.803705558337505 task: type: BitextMining - dataset: config: hun_Latn-swe_Latn name: MTEB NTREXBitextMining (hun_Latn-swe_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.59038557836755 - type: f1 value: 91.66249374061091 - type: main_score value: 91.66249374061091 - type: precision value: 90.74445000834585 - type: recall value: 93.59038557836755 task: type: BitextMining - dataset: config: hun_Latn-tam_Taml name: MTEB NTREXBitextMining (hun_Latn-tam_Taml) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 27.391086629944915 - type: f1 value: 19.094552675413095 - type: main_score value: 19.094552675413095 - type: precision value: 16.88288208814635 - type: recall value: 27.391086629944915 task: type: BitextMining - dataset: config: hun_Latn-tur_Latn name: MTEB NTREXBitextMining (hun_Latn-tur_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.48723084626941 - type: f1 value: 89.11700884660323 - type: main_score value: 89.11700884660323 - type: precision value: 87.99031881155067 - type: recall value: 91.48723084626941 task: type: BitextMining - dataset: config: hun_Latn-vie_Latn name: MTEB NTREXBitextMining (hun_Latn-vie_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.13670505758637 - type: f1 value: 88.6696711734268 - type: main_score value: 88.6696711734268 - type: precision value: 87.49374061091638 - type: recall value: 91.13670505758637 task: type: BitextMining - dataset: config: hun_Latn-zho_Hant name: MTEB NTREXBitextMining (hun_Latn-zho_Hant) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 
split: test type: mteb/NTREX metrics: - type: accuracy value: 89.33400100150224 - type: f1 value: 86.55745523046474 - type: main_score value: 86.55745523046474 - type: precision value: 85.29794692038057 - type: recall value: 89.33400100150224 task: type: BitextMining - dataset: config: hun_Latn-zul_Latn name: MTEB NTREXBitextMining (hun_Latn-zul_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 16.675012518778168 - type: f1 value: 11.21636405139599 - type: main_score value: 11.21636405139599 - type: precision value: 9.903070059112947 - type: recall value: 16.675012518778168 task: type: BitextMining - dataset: config: ind_Latn-hun_Latn name: MTEB NTREXBitextMining (ind_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 92.93940911367051 - type: f1 value: 90.96478050408946 - type: main_score value: 90.96478050408946 - type: precision value: 90.03922550492406 - type: recall value: 92.93940911367051 task: type: BitextMining - dataset: config: jpn_Jpan-hun_Latn name: MTEB NTREXBitextMining (jpn_Jpan-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 88.28242363545317 - type: f1 value: 85.11433817392756 - type: main_score value: 85.11433817392756 - type: precision value: 83.67551326990485 - type: recall value: 88.28242363545317 task: type: BitextMining - dataset: config: kor_Hang-hun_Latn name: MTEB NTREXBitextMining (kor_Hang-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 85.778668002003 - type: f1 value: 81.83608746453012 - type: main_score value: 81.83608746453012 - type: precision value: 80.0233683859122 - type: recall value: 85.778668002003 task: type: BitextMining - dataset: config: lav_Latn-hun_Latn name: MTEB NTREXBitextMining (lav_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.73760640961443 - type: f1 value: 89.42914371557336 - type: main_score value: 89.42914371557336 - type: precision value: 88.32832582206642 - type: recall value: 91.73760640961443 task: type: BitextMining - dataset: config: lit_Latn-hun_Latn name: MTEB NTREXBitextMining (lit_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.78768152228342 - type: f1 value: 89.50926389584376 - type: main_score value: 89.50926389584376 - type: precision value: 88.39926556501419 - type: recall value: 91.78768152228342 task: type: BitextMining - dataset: config: nld_Latn-hun_Latn name: MTEB NTREXBitextMining (nld_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.49023535302955 - type: f1 value: 91.6190953096311 - type: main_score value: 91.6190953096311 - type: precision value: 90.72775830412286 - type: recall value: 93.49023535302955 task: type: BitextMining - dataset: config: pol_Latn-hun_Latn name: MTEB NTREXBitextMining (pol_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 91.28693039559339 - type: f1 value: 88.99515940577533 - type: main_score value: 88.99515940577533 - type: precision value: 87.9293940911367 - type: recall value: 91.28693039559339 task: type: BitextMining - dataset: config: por_Latn-hun_Latn name: MTEB 
NTREXBitextMining (por_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.03955933900852 - type: f1 value: 91.08496077449509 - type: main_score value: 91.08496077449509 - type: precision value: 90.17860123518612 - type: recall value: 93.03955933900852 task: type: BitextMining - dataset: config: rus_Cyrl-hun_Latn name: MTEB NTREXBitextMining (rus_Cyrl-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.98647971957938 - type: f1 value: 88.43932565514937 - type: main_score value: 88.43932565514937 - type: precision value: 87.2475379736271 - type: recall value: 90.98647971957938 task: type: BitextMining - dataset: config: spa_Latn-hun_Latn name: MTEB NTREXBitextMining (spa_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.23985978968453 - type: f1 value: 91.3386746786847 - type: main_score value: 91.3386746786847 - type: precision value: 90.43148055416457 - type: recall value: 93.23985978968453 task: type: BitextMining - dataset: config: swa_Latn-hun_Latn name: MTEB NTREXBitextMining (swa_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 35.95393089634452 - type: f1 value: 30.612257939034187 - type: main_score value: 30.612257939034187 - type: precision value: 28.995078568906944 - type: recall value: 35.95393089634452 task: type: BitextMining - dataset: config: swe_Latn-hun_Latn name: MTEB NTREXBitextMining (swe_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 93.64046069103655 - type: f1 value: 91.86613253213153 - type: main_score value: 91.86613253213153 - type: precision value: 91.04072775830413 - type: recall value: 93.64046069103655 task: type: BitextMining - dataset: config: tam_Taml-hun_Latn name: MTEB NTREXBitextMining (tam_Taml-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 29.04356534802203 - type: f1 value: 25.164093122029808 - type: main_score value: 25.164093122029808 - type: precision value: 23.849573878565543 - type: recall value: 29.04356534802203 task: type: BitextMining - dataset: config: tur_Latn-hun_Latn name: MTEB NTREXBitextMining (tur_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.83625438157236 - type: f1 value: 88.36087464530128 - type: main_score value: 88.36087464530128 - type: precision value: 87.19829744616925 - type: recall value: 90.83625438157236 task: type: BitextMining - dataset: config: vie_Latn-hun_Latn name: MTEB NTREXBitextMining (vie_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.68602904356536 - type: f1 value: 88.10882991153397 - type: main_score value: 88.10882991153397 - type: precision value: 86.90118511099983 - type: recall value: 90.68602904356536 task: type: BitextMining - dataset: config: zho_Hant-hun_Latn name: MTEB NTREXBitextMining (zho_Hant-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 90.1352028042063 - type: f1 value: 87.46035720247039 - type: main_score value: 87.46035720247039 - type: precision value: 86.19810668383528 - type: recall 
value: 90.1352028042063 task: type: BitextMining - dataset: config: zul_Latn-hun_Latn name: MTEB NTREXBitextMining (zul_Latn-hun_Latn) revision: ed9a4403ed4adbfaf4aab56d5b2709e9f6c3ba33 split: test type: mteb/NTREX metrics: - type: accuracy value: 17.1256885327992 - type: f1 value: 13.692538409811572 - type: main_score value: 13.692538409811572 - type: precision value: 12.811084017018844 - type: recall value: 17.1256885327992 task: type: BitextMining - dataset: config: rom-hun name: MTEB RomaTalesBitextMining (rom-hun) revision: f4394dbca6845743cd33eba77431767b232ef489 split: test type: kardosdrur/roma-tales metrics: - type: accuracy value: 6.046511627906977 - type: f1 value: 2.950830564784053 - type: main_score value: 2.950830564784053 - type: precision value: 2.295127353266888 - type: recall value: 6.046511627906977 task: type: BitextMining - dataset: config: hun_Latn name: MTEB SIB200Classification (hun_Latn) revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b split: test type: mteb/sib200 metrics: - type: accuracy value: 72.74509803921569 - type: f1 value: 71.6748881571977 - type: f1_weighted value: 72.7699432186266 - type: main_score value: 72.74509803921569 task: type: Classification - dataset: config: hun_Latn name: MTEB SIB200Classification (hun_Latn) revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b split: train type: mteb/sib200 metrics: - type: accuracy value: 71.92582025677605 - type: f1 value: 70.9175403606058 - type: f1_weighted value: 71.9988920000764 - type: main_score value: 71.92582025677605 task: type: Classification - dataset: config: hun_Latn name: MTEB SIB200Classification (hun_Latn) revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b split: validation type: mteb/sib200 metrics: - type: accuracy value: 66.76767676767676 - type: f1 value: 66.07599012119566 - type: f1_weighted value: 67.15823510190054 - type: main_score value: 66.76767676767676 task: type: Classification - dataset: config: hun_Latn name: MTEB SIB200ClusteringS2S (hun_Latn) revision: a74d7350ea12af010cfb1c21e34f1f81fd2e615b split: test type: mteb/sib200 metrics: - type: main_score value: 39.24288169703154 - type: v_measure value: 39.24288169703154 - type: v_measure_std value: 2.214708184335194 task: type: Clustering - dataset: config: hun-eng name: MTEB Tatoeba (hun-eng) revision: 69e8f12da6e31d59addadda9a9c8a2e601a0e282 split: test type: mteb/tatoeba-bitext-mining metrics: - type: accuracy value: 91.0 - type: f1 value: 88.47999999999999 - type: main_score value: 88.47999999999999 - type: precision value: 87.3 - type: recall value: 91.0 task: type: BitextMining tags: - mteb --- base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 language: - hu library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy - dot_accuracy - manhattan_accuracy - euclidean_accuracy - max_accuracy pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:857856 - loss:MultipleNegativesRankingLoss widget: - source_sentence: Emberek várnak a lámpánál kerékpárral. sentences: - Az emberek piros lámpánál haladnak. - Az emberek a kerékpárjukon vannak. - Egy fekete kutya úszik a vízben egy teniszlabdával a szájában - source_sentence: A kutya a vízben van. sentences: - Két férfi takarítja a havat a tetőről, az egyik egy emelőben ül, a másik pedig a tetőn. - A macska a vízben van, és dühös. - Egy kutya van a vízben, a szájában egy faág. - source_sentence: A nő feketét visel. 
sentences: - Egy barna kutya fröcsköl, ahogy úszik a vízben. - Egy tetoválással rendelkező nő, aki fekete tank tetején néz a földre. - 'Egy kékbe öltözött nő intenzív arckifejezéssel üti a teniszlabdát. A képen:' - source_sentence: Az emberek alszanak. sentences: - Három ember beszélget egy városi utcán. - A nő fehéret visel. - Egy apa és a fia ölelgeti alvás közben. - source_sentence: Az emberek alszanak. sentences: - Egy feketébe öltözött nő cigarettát és bevásárlótáskát tart a kezében, miközben egy idősebb nő átmegy az utcán. - Egy csoport ember ül egy nyitott, térszerű területen, mögötte nagy bokrok és egy sor viktoriánus stílusú épület, melyek közül sokat a kép jobb oldalán lévő erős elmosódás tesz kivehetetlenné. - Egy apa és a fia ölelgeti alvás közben. model-index: - name: paraphrase-multilingual-MiniLM-L12-hu-v1 results: - task: type: triplet name: Triplet dataset: name: all nli dev type: all-nli-dev metrics: - type: cosine_accuracy value: 0.992 name: Cosine Accuracy - type: dot_accuracy value: 0.0108 name: Dot Accuracy - type: manhattan_accuracy value: 0.9908 name: Manhattan Accuracy - type: euclidean_accuracy value: 0.9908 name: Euclidean Accuracy - type: max_accuracy value: 0.992 name: Max Accuracy - task: type: triplet name: Triplet dataset: name: all nli test type: all-nli-test metrics: - type: cosine_accuracy value: 0.9913636363636363 name: Cosine Accuracy - type: dot_accuracy value: 0.013939393939393939 name: Dot Accuracy - type: manhattan_accuracy value: 0.990909090909091 name: Manhattan Accuracy - type: euclidean_accuracy value: 0.9910606060606061 name: Euclidean Accuracy - type: max_accuracy value: 0.9913636363636363 name: Max Accuracy # paraphrase-multilingual-MiniLM-L12-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on the train dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
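As a quick illustration of the semantic search use case mentioned above, here is a minimal sketch (the corpus and query sentences are made-up examples, not taken from the training data):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("karsar/paraphrase-multilingual-MiniLM-L12-hu_v1")

# Tiny illustrative Hungarian corpus and query
corpus = [
    "A kutya a vízben úszik.",           # "The dog is swimming in the water."
    "Egy férfi gitározik a színpadon.",  # "A man is playing guitar on stage."
    "A gyerekek a parkban játszanak.",   # "The children are playing in the park."
]
query = "Egy állat úszik a tóban."       # "An animal is swimming in the lake."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
best = hits[0][0]
print(corpus[best["corpus_id"]], best["score"])
```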
## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) <!-- at revision ae06c001a2546bef168b9bf8f570ccb1a16aaa27 -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
    - train
- **Language:** hu
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("karsar/paraphrase-multilingual-MiniLM-L12-hu_v1")
# Run inference
sentences = [
    'Az emberek alszanak.',
    'Egy apa és a fia ölelgeti alvás közben.',
    'Egy csoport ember ül egy nyitott, térszerű területen, mögötte nagy bokrok és egy sor viktoriánus stílusú épület, melyek közül sokat a kép jobb oldalán lévő erős elmosódás tesz kivehetetlenné.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Triplet
* Dataset: `all-nli-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric             | Value     |
|:-------------------|:----------|
| cosine_accuracy    | 0.992     |
| dot_accuracy       | 0.0108    |
| manhattan_accuracy | 0.9908    |
| euclidean_accuracy | 0.9908    |
| **max_accuracy**   | **0.992** |

#### Triplet
* Dataset: `all-nli-test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric             | Value      |
|:-------------------|:-----------|
| cosine_accuracy    | 0.9914     |
| dot_accuracy       | 0.0139     |
| manhattan_accuracy | 0.9909     |
| euclidean_accuracy | 0.9911     |
| **max_accuracy**   | **0.9914** |

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model?
You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### train * Dataset: train * Size: 857,856 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 11.73 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.24 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.07 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------------------------------|:----------------------------------------------|:---------------------------------------------------------------| | <code>Egy lóháton ülő ember átugrik egy lerombolt repülőgép felett.</code> | <code>Egy ember a szabadban, lóháton.</code> | <code>Egy ember egy étteremben van, és omlettet rendel.</code> | | <code>Gyerekek mosolyogva és integetett a kamera</code> | <code>Gyermekek vannak jelen</code> | <code>A gyerekek homlokot rántanak</code> | | <code>Egy fiú ugrál a gördeszkát a közepén egy piros híd.</code> | <code>A fiú gördeszkás trükköt csinál.</code> | <code>A fiú korcsolyázik a járdán.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Evaluation Dataset #### train * Dataset: train * Size: 5,000 evaluation samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 1000 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 7 tokens</li><li>mean: 11.73 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.24 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.07 tokens</li><li>max: 53 tokens</li></ul> | * Samples: | anchor | positive | negative | |:---------------------------------------------------------------------------|:----------------------------------------------|:---------------------------------------------------------------| | <code>Egy lóháton ülő ember átugrik egy lerombolt repülőgép felett.</code> | <code>Egy ember a szabadban, lóháton.</code> | <code>Egy ember egy étteremben van, és omlettet rendel.</code> | | <code>Gyerekek mosolyogva és integetett a kamera</code> | <code>Gyermekek vannak jelen</code> | <code>A gyerekek homlokot rántanak</code> | | <code>Egy fiú ugrál a gördeszkát a közepén egy piros híd.</code> | <code>A fiú gördeszkás trükköt csinál.</code> | <code>A fiú 
korcsolyázik a járdán.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `num_train_epochs`: 1 - `warmup_ratio`: 0.1 - `bf16`: True - `batch_sampler`: no_duplicates #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 128 - `per_device_eval_batch_size`: 128 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: True - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - 
`dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | train loss | all-nli-dev_max_accuracy | all-nli-test_max_accuracy | |:------:|:----:|:-------------:|:----------:|:------------------------:|:-------------------------:| | 0 | 0 | - | - | 0.7574 | - | | 0.0149 | 100 | 2.5002 | - | - | - | | 0.0298 | 200 | 1.9984 | - | - | - | | 0.0448 | 300 | 1.8094 | - | - | - | | 0.0597 | 400 | 1.6704 | - | - | - | | 0.0746 | 500 | 1.5518 | - | - | - | | 0.0895 | 600 | 1.449 | - | - | - | | 0.1044 | 700 | 1.5998 | - | - | - | | 0.1194 | 800 | 1.5725 | - | - | - | | 0.1343 | 900 | 1.5341 | - | - | - | | 0.1492 | 1000 | 1.3423 | - | - | - | | 0.1641 | 1100 | 1.2485 | - | - | - | | 0.1791 | 1200 | 1.1527 | - | - | - | | 0.1940 | 1300 | 1.1672 | - | - | - | | 0.2089 | 1400 | 1.2426 | - | - | - | | 0.2238 | 1500 | 1.0948 | - | - | - | | 0.2387 | 1600 | 1.0069 | - | - | - | | 0.2537 | 1700 | 0.976 | - | - | - | | 0.2686 | 1800 | 0.897 | - | - | - | | 0.2835 | 1900 | 0.7825 | - | - | - | | 0.2984 | 2000 | 0.9421 | 0.1899 | 0.9568 | - | | 0.3133 | 2100 | 0.8651 | - | - | - | | 0.3283 | 2200 | 0.8184 | - | - | - | | 0.3432 | 2300 | 0.699 | - | - | - | | 0.3581 | 2400 | 0.6704 | - | - | - | | 0.3730 | 2500 | 0.6477 | - | - | - | | 0.3879 | 2600 | 0.7077 | - | - | - | | 0.4029 | 2700 | 0.7364 | - | - | - | | 0.4178 | 2800 | 0.665 | - | - | - | | 0.4327 | 2900 | 1.2512 | - | - | - | | 0.4476 | 3000 | 1.3693 | - | - | - | | 0.4625 | 3100 | 1.3959 | - | - | - | | 0.4775 | 3200 | 1.4175 | - | - | - | | 0.4924 | 3300 | 1.402 | - | - | - | | 0.5073 | 3400 | 1.3832 | - | - | - | | 0.5222 | 3500 | 1.3671 | - | - | - | | 0.5372 | 3600 | 1.3666 | - | - | - | | 0.5521 | 3700 | 1.3479 | - | - | - | | 0.5670 | 3800 | 1.3272 | - | - | - | | 0.5819 | 3900 | 1.3353 | - | - | - | | 0.5968 | 4000 | 1.3177 | 0.0639 | 0.9902 | - | | 0.6118 | 4100 | 1.3068 | - | - | - | | 0.6267 | 4200 | 1.3054 | - | - | - | | 0.6416 | 4300 | 1.3098 | - | - | - | | 0.6565 | 4400 | 1.2839 | - | - | - | | 0.6714 | 4500 | 1.2976 | - | - | - | | 0.6864 | 4600 | 1.2669 | - | - | - | | 0.7013 | 4700 | 1.208 | - | - | - | | 0.7162 | 4800 | 1.194 | - | - | - | | 0.7311 | 4900 | 1.1974 | - | - | - | | 0.7460 | 5000 | 1.1834 | - | - | - | | 0.7610 | 5100 | 1.1876 | - | - | - | | 0.7759 | 5200 | 1.1743 | - | - | - | | 0.7908 | 5300 | 1.1839 | - | - | - | | 0.8057 | 5400 | 1.1778 | - | - | - | | 0.8207 | 5500 | 1.1711 | - | - | - | | 0.8356 | 5600 | 1.1809 | - | - | - | | 0.8505 | 5700 | 1.1825 | - | - | - | | 0.8654 | 5800 | 1.1795 | - | - | - | | 0.8803 | 5900 | 1.1788 | - | - | - | | 0.8953 | 6000 | 1.1819 | 0.0371 | 0.992 | - | | 0.9102 | 6100 | 1.1741 | - | - | - | | 0.9251 | 6200 | 1.1871 | - | - | - | | 0.9400 | 6300 | 0.498 | - | - | - | | 0.9549 | 6400 | 0.093 | - | - | - | | 0.9699 | 6500 | 0.1597 | - | - | - | | 0.9848 | 6600 | 0.2033 | - | - | - | | 0.9997 | 6700 | 0.16 | - | - | - | | 1.0 | 6702 | - | - | - | 0.9914 | ### Framework Versions - Python: 3.11.8 - Sentence Transformers: 3.1.1 - Transformers: 4.44.0 - PyTorch: 2.3.0.post101 - Accelerate: 0.33.0 - Datasets: 2.18.0 - Tokenizers: 0.19.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex 
@inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* --> ---
deepnet/SN9-C4-llama-HK5-1
deepnet
2024-10-28T18:41:14Z
185
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "transformers.js", "tokenizers", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-27T18:40:43Z
--- library_name: transformers tags: - transformers.js - tokenizers --- # GPT-4 Tokenizer A 🤗-compatible version of the **GPT-4 tokenizer** (adapted from [openai/tiktoken](https://github.com/openai/tiktoken)). This means it can be used with Hugging Face libraries including [Transformers](https://github.com/huggingface/transformers), [Tokenizers](https://github.com/huggingface/tokenizers), and [Transformers.js](https://github.com/xenova/transformers.js). ## Example usage: ### Transformers/Tokenizers ```py from transformers import GPT2TokenizerFast tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/gpt-4') assert tokenizer.encode('hello world') == [15339, 1917] ``` ### Transformers.js ```js import { AutoTokenizer } from '@xenova/transformers'; const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt-4'); const tokens = tokenizer.encode('hello world'); // [15339, 1917] ```
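### Tokenizers

The same tokenizer files also load with the standalone [Tokenizers](https://github.com/huggingface/tokenizers) library; a minimal sketch (the token ids should match the Transformers example above):

```py
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained('Xenova/gpt-4')
encoding = tokenizer.encode('hello world')
print(encoding.ids)  # expected: [15339, 1917]
```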
mav23/ELYZA-japanese-Llama-2-7b-GGUF
mav23
2024-10-28T18:37:45Z
16
0
null
[ "gguf", "ja", "en", "arxiv:2307.09288", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-10-28T17:47:44Z
---
license: llama2
language:
- ja
- en
---

## ELYZA-japanese-Llama-2-7b

![ELYZA-Japanese-Llama2-image](./key_visual.png)

### Model Description

**ELYZA-japanese-Llama-2-7b** is a model built on Llama 2 with additional pre-training to extend its Japanese language capabilities. See the [blog post (Japanese)](https://note.com/elyza/n/na405acaca130) for details.

### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。"
text = "クマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を書いてください。"

model_name = "elyza/ELYZA-japanese-Llama-2-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

if torch.cuda.is_available():
    model = model.to("cuda")

prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format(
    bos_token=tokenizer.bos_token,
    b_inst=B_INST,
    system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}",
    prompt=text,
    e_inst=E_INST,
)

with torch.no_grad():
    token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    output_ids = model.generate(
        token_ids.to(model.device),
        max_new_tokens=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True)
print(output)
"""
承知しました。以下にクマが海辺に行ってアザラシと友達になり、最終的には家に帰るというプロットの短編小説を記述します。

クマは山の中でゆっくりと眠っていた。
その眠りに落ちたクマは、夢の中で海辺を歩いていた。
そこにはアザラシがいた。
クマはアザラシに話しかける。

「おはよう」とクマが言うと、アザラシは驚いたように顔を上げた。

「あ、こんにちは」アザラシは答えた。

クマはアザラシと友達になりたいと思う。

「私はクマと申します。」クマは...
"""
```

### ELYZA-japanese-Llama-2-7b Models

| Model Name | Vocab Size | #Params |
|:---------------------------------------------|:----------:|:-------:|
|[elyza/ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)| 32000 | 6.27B |
|[elyza/ELYZA-japanese-Llama-2-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-instruct)| 32000 | 6.27B |
|[elyza/ELYZA-japanese-Llama-2-7b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast)| 45043 | 6.37B |
|[elyza/ELYZA-japanese-Llama-2-7b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct)| 45043 | 6.37B |

### Developers

Listed in alphabetical order:

- [Akira Sasaki](https://huggingface.co/akirasasaki)
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)

### Licence

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
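### Usage with GGUF (llama.cpp)

Since this repository hosts GGUF conversions, the model can also be run with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch; the quant filename pattern below is an assumption, so check this repository's file list for the exact name:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mav23/ELYZA-japanese-Llama-2-7b-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; pick any GGUF file from this repo
    n_ctx=4096,
)
prompt = "[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n日本の首都について教えてください。 [/INST] "
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```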
### How to Cite ```tex @misc{elyzallama2023, title={ELYZA-japanese-Llama-2-7b}, url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura}, year={2023}, } ``` ### Citations ```tex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ThatsGroes/Llama-3.1-70B-Instruct-SkoleGPT
ThatsGroes
2024-10-28T18:33:00Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "dataset:kobprof/skolegpt-instruct", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-70B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T11:10:44Z
---
base_model: meta-llama/Llama-3.1-70B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
---

# Uploaded model

- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-70B-Instruct

LoRA adapter on Llama-3.1-70B loaded in 4-bit. Trained for 1 epoch with rank = lora_alpha = 8.

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

We ended up using 62.52 GB of GPU memory (79.00%), of which 23.83 GB (30.12%) was used for LoRA.

Energy consumption as reported by codecarbon:

[codecarbon INFO @ 11:07:59] Energy consumed for RAM : 2.574882 kWh. RAM Power : 188.78840446472168 W

[codecarbon INFO @ 11:07:59] Energy consumed for all GPUs : 4.045188 kWh. Total GPU Power : 270.22211938762564 W

[codecarbon INFO @ 11:07:59] Energy consumed for all CPUs : 0.579916 kWh. Total CPU Power : 42.5 W

[codecarbon INFO @ 11:07:59] 7.199986 kWh of electricity used since the beginning.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
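Since this repository holds a LoRA adapter rather than merged weights, one way to run it is to attach it to the 4-bit base model with PEFT. A minimal sketch; it assumes the adapter files here are in standard PEFT format, and the Danish prompt is just an illustrative example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-70B-Instruct"
adapter_id = "ThatsGroes/Llama-3.1-70B-Instruct-SkoleGPT"

# Load the base model in 4-bit, matching the training setup described above
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

messages = [{"role": "user", "content": "Forklar fotosyntese for en 7. klasse."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```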
adipanda/anime-chars-simpletuner-lora-1
adipanda
2024-10-28T18:28:25Z
6
4
diffusers
[ "diffusers", "flux", "flux-diffusers", "text-to-image", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-16T08:41:19Z
--- license: other base_model: "black-forest-labs/FLUX.1-dev" tags: - flux - flux-diffusers - text-to-image - diffusers - simpletuner - safe-for-work - lora - template:sd-lora - lycoris inference: true widget: - text: 'unconditional (blank prompt)' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_0_0.png - text: 'A scene from One Piece. Nami stands at the edge of a cliff, wind blowing through her hair, with the ocean far below. She''s wearing a purple tank top and white skirt, her eyes full of determination. Sanji Vinsmoke is behind her, looking away, leaning on a tree with a cigarette in hand. He''s dressed in his usual black suit with a blue dress shirt and black tie. The setting sun casts a golden glow over the rocky terrain and crashing waves below.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_1_0.png - text: 'A scene from Jujutsu Kaisen. Maki Zenin is standing on a beach during sunset holding a sign that says ''I love prompts''.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_2_0.png - text: 'A scene from Demon Slayer. Muzan Kibutsuji is standing in front of a large chalkboard filled with cryptic equations and diagrams. He''s wearing a white fedora, a white suit with a purple shirt underneath, and white gloves. He holds a piece of chalk in one hand and a test tube in the other, his eyes gleaming with focus. The warm light from nearby candles flickers against the shelves of mysterious bottles and ingredients in his lab, giving the scene an eerie ambiance.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_3_0.png - text: 'A scene from Attack on Titan. Levi Ackerman is sitting under a tree, holding the piece of paper close to his face as he reads intently' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_4_0.png - text: 'A scene from My Hero Academia. Shoto Todoroki stands on a cliff overlooking a vast desert, his hand outstretched and half of the cliffside covered in ice. He''s wearing his hero costume: a dark blue jacket with elbow-length sleeves, matching pants, a silver-colored combat vest, and white boots. His outfit contrasts against the bright, sandy terrain. The harsh sunlight creates long shadows across the rocky landscape, adding to the intensity of the moment.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_5_0.png - text: 'A scene from Demon Slayer. Tanjiro Kamado stands beside him, holding his sword at the ready. He''s wearing his Demon Slayer uniform: a dark jacket with gold trim over a white shirt, along with his signature green and black checkered haori fluttering in the wind. Shoto Todoroki stands in a dense forest, with a serious expression. He''s wearing his hero costume: a dark blue jacket with elbow-length sleeves, matching pants, a silver-colored combat vest, and white boots. His ice powers cover part of the forest floor.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_6_0.png - text: 'A scene from Attack on Titan. Mikasa Ackerman watches in disbelief. She''s wearing her Survey Corps uniform: a short, light brown jacket with the Wings of Freedom insignia, white pants, and dark brown boots. Her military uniform flutters in the wind as she stands ready with her ODM gear. Monkey D. Luffy is climbing the massive walls of Paradis Island, laughing as he pulls himself higher. 
He''s wearing his classic outfit: a red vest, blue shorts, and sandals, with his straw hat secured around his neck. The sky is clear, with just a few clouds drifting above, and the vast landscape stretches out below them.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_7_0.png - text: 'A scene from My Hero Academia. Makima stands in the middle of a quiet street, holding a test tube filled with a strange glowing liquid. She''s wearing her usual attire: a white button-up shirt, black tie, and dark pants, with her distinctive red hair tied back in a braid. Shoto Todoroki stands next to her, his hand raised as he is waving. He''s in his hero costume: a dark blue jacket with elbow-length sleeves, matching pants, a silver-colored combat vest, and white boots. The street is eerily still, with the remains of a destroyed building in the background, and the sun is setting, casting long shadows across the scene.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_8_0.png - text: 'A scene from One Piece. Levi Ackerman is on the deck of the Thousand Sunny, methodically cleaning his gear with a stern expression. He''s wearing his Survey Corps uniform: a short, light brown jacket with the Wings of Freedom insignia, white pants, and dark brown boots. Nami stands nearby, glancing over at him with a mix of curiosity and amusement, her arms crossed. She''s wearing a blue and white striped bikini top, low-rise jeans, and high-heeled sandals. The ship sails through calm waters under a bright blue sky, with a distant island just visible on the horizon.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_9_0.png - text: 'A scene from My Hero Academia. Shoto Todoroki stands in the middle of a training field, facing off against an opponent. Shoto Todoroki is in his hero costume: a dark blue jacket with elbow-length sleeves, matching pants, a silver-colored combat vest, and white boots. His Opponent, Ichigo Kurosaki stands opposite him, he is wearing his Shinigami robes: a black kimono and hakama, with a white sash around his waist. He holds his sword with a confident expression. The sky above is cloudy, with faint rays of sunlight piercing through.' parameters: negative_prompt: 'blurry, cropped, ugly' output: url: ./assets/image_10_0.png --- # anime-chars-simpletuner-lora-1 This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). No validation prompt was used during training. None ## Validation settings - CFG: `3.5` - CFG Rescale: `0.0` - Steps: `20` - Sampler: `None` - Seed: `42` - Resolution: `1536x864` Note: The validation settings are not necessarily the same as the [training settings](#training-settings). You can find some example images in the following gallery: <Gallery /> The text encoder **was not** trained. You may reuse the base model text encoder for inference. 
## Training settings

- Training epochs: 382
- Training steps: 25200
- Learning rate: 0.001
- Effective batch size: 160
- Micro-batch size: 40
- Gradient accumulation steps: 1
- Number of GPUs: 4
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LyCORIS Config:
```json
{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 12,
    "apply_preset": {
        "target_module": [
            "Attention",
            "FeedForward"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 12
            },
            "FeedForward": {
                "factor": 6
            }
        }
    }
}
```

## Datasets

### anime_char-512

- Repeats: 2
- Total number of images: ~3476
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None

## Inference

```python
import torch
from diffusers import DiffusionPipeline
from lycoris import create_lycoris_from_weights

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'pytorch_lora_weights.safetensors' # you will have to download this manually
lora_scale = 1.0

# Load the base pipeline before merging the LyCORIS weights into its transformer
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

wrapper, _ = create_lycoris_from_weights(lora_scale, adapter_id, pipeline.transformer)
wrapper.merge_to()

prompt = "An astronaut is riding a horse through the jungles of Thailand."

pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
    width=1536,
    height=864,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
Sayankotor/llama-2-7b_exp_gen_30_32
Sayankotor
2024-10-28T18:24:34Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T14:01:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
khanhvy31/t5-training
khanhvy31
2024-10-28T18:19:51Z
135
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-28T18:19:02Z
---
library_name: transformers
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-training
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# t5-training

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7143
- Mse: 0.3397

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mse    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0303        | 1.0   | 392  | 0.7538          | 0.3615 |
| 0.7436        | 2.0   | 784  | 0.7168          | 0.3378 |
| 0.7185        | 3.0   | 1176 | 0.7143          | 0.3397 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
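No usage snippet was recorded by the Trainer. As a starting point, the checkpoint should load like any T5 seq2seq model; a minimal sketch (the input text is a placeholder, since the training data and expected input format are not documented):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "khanhvy31/t5-training"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the task-specific prompt format is undocumented
inputs = tokenizer("placeholder input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```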
smiled0g/preflop_gto_small
smiled0g
2024-10-28T18:16:34Z
134
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T18:16:25Z
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: preflop_gto_small
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/smiled0g/preflop_gto_small/runs/h5r8goj9)

# preflop_gto_small

This model was trained from an unspecified base model on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.46.0
- Pytorch 2.1.1+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
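No usage snippet was recorded by the Trainer. The checkpoint should load as a standard text-classification model; a minimal sketch (the input string is a placeholder, since the expected encoding of preflop situations is not documented here):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="smiled0g/preflop_gto_small")
# Placeholder input; the real input format for poker situations is undocumented.
print(clf("BTN opens 2.5bb, hero in BB with AKs"))
```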
mradermacher/Qwen-modelstock2-15B-i1-GGUF
mradermacher
2024-10-28T18:13:08Z
74
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:allknowingroger/Qwen-modelstock2-15B", "base_model:quantized:allknowingroger/Qwen-modelstock2-15B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-28T15:57:12Z
--- base_model: allknowingroger/Qwen-modelstock2-15B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/allknowingroger/Qwen-modelstock2-15B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Qwen-modelstock2-15B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 8.6 | fast on arm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 8.6 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 8.6 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/Qwen-modelstock2-15B-i1-GGUF/resolve/main/Qwen-modelstock2-15B.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
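## Quick Download

For convenience, one way to fetch a single quant from the table above (a minimal sketch using `huggingface_hub`; pick whichever quant fits your hardware):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen-modelstock2-15B-i1-GGUF",
    filename="Qwen-modelstock2-15B.i1-Q4_K_M.gguf",  # the "fast, recommended" quant from the table
)
print(path)  # pass this file to a GGUF runtime such as llama.cpp
```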
MaziyarPanahi/L3-Rhaenys-8B-GGUF
MaziyarPanahi
2024-10-28T18:03:00Z
52
0
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:tannedbum/L3-Rhaenys-8B", "base_model:quantized:tannedbum/L3-Rhaenys-8B", "region:us", "conversational" ]
text-generation
2024-10-28T17:37:16Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: L3-Rhaenys-8B-GGUF
base_model: tannedbum/L3-Rhaenys-8B
inference: false
model_creator: tannedbum
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/L3-Rhaenys-8B-GGUF](https://huggingface.co/MaziyarPanahi/L3-Rhaenys-8B-GGUF)

- Model creator: [tannedbum](https://huggingface.co/tannedbum)
- Original model: [tannedbum/L3-Rhaenys-8B](https://huggingface.co/tannedbum/L3-Rhaenys-8B)

## Description

[MaziyarPanahi/L3-Rhaenys-8B-GGUF](https://huggingface.co/MaziyarPanahi/L3-Rhaenys-8B-GGUF) contains GGUF format model files for [tannedbum/L3-Rhaenys-8B](https://huggingface.co/tannedbum/L3-Rhaenys-8B).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
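## Example usage

As a quick start with one of the clients listed above, here is a minimal sketch using llama-cpp-python (the quant filename pattern is an assumption; check the repository's file list for the exact name):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/L3-Rhaenys-8B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; any GGUF file from this repo works
)
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```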
HappyAIUser/Amazing-16bit
HappyAIUser
2024-10-28T18:01:14Z
59
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T17:52:24Z
--- base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** HappyAIUser - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
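A minimal inference sketch (it assumes this repository contains merged 16-bit weights, as the model name suggests, rather than a bare adapter):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HappyAIUser/Amazing-16bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about the sea."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```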
MaheshKumarK/gemmainstructwithcontext
MaheshKumarK
2024-10-28T17:55:30Z
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T11:39:47Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ANGELRC2/vit-model-upeu_sistemas_v2
ANGELRC2
2024-10-28T17:51:23Z
153
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-10-28T17:47:09Z
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-model-upeu_sistemas_v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# vit-model-upeu_sistemas_v2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the AI-Lab-Makerere/beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0550
- Accuracy: 0.9850

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1264        | 3.8462 | 500  | 0.0550          | 0.9850   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
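The card lacks a usage snippet; below is a minimal inference sketch using the standard `transformers` image-classification pipeline with this checkpoint. The image path is illustrative (any RGB image of a bean leaf, e.g. a sample from AI-Lab-Makerere/beans, should work).

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from the Hub.
classifier = pipeline(
    "image-classification",
    model="ANGELRC2/vit-model-upeu_sistemas_v2",
)

# "leaf.jpg" is a placeholder path for a local test image.
predictions = classifier("leaf.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```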
lfcc/medlink-cross-encoder
lfcc
2024-10-28T17:51:15Z
105
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "cross-encoder", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T17:46:24Z
--- library_name: transformers tags: - cross-encoder --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
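The getting-started section above is empty; here is a minimal sketch for scoring a text pair with this BERT cross-encoder, assuming the standard `AutoModelForSequenceClassification` interface. The example pair and the interpretation of the output are illustrative, since the card does not document the label semantics or score range.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "lfcc/medlink-cross-encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# A cross-encoder encodes both texts jointly in a single forward pass,
# so the pair is passed as two segments of one input.
text_a = "first clinical abstract ..."   # placeholder text
text_b = "second clinical abstract ..."  # placeholder text
inputs = tokenizer(text_a, text_b, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Raw relevance logits; the appropriate activation (softmax/sigmoid)
# depends on the undocumented training setup.
print(logits)
```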
BrandonZYW/opt-2.7b-InBedder
BrandonZYW
2024-10-28T17:50:59Z
4
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "en", "dataset:KomeijiForce/Inbedder-Pretrain-Data", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-15T23:36:16Z
---
license: mit
datasets:
- KomeijiForce/Inbedder-Pretrain-Data
language:
- en
---

# [ACL2024] Answer is All You Need: Instruction-following Text Embedding via Answering the Question

InBedder🛌 is a text embedder designed to follow instructions. An instruction-following text embedder can capture the characteristics of texts specified by user instructions. InBedder offers a novel viewpoint that treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly. We show that InBedder is aware of instructions across different evaluation tasks.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64323dd503d81fa4d26deaf9/jLbqF-2uT8Aw9DsN7XCVG.png)

The following is a use case from [https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb](https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb)

```python
import torch
from torch.nn.functional import gelu, cosine_similarity
from transformers import AutoTokenizer, AutoModelForMaskedLM


class InBedder():

    def __init__(self, path='KomeijiForce/inbedder-roberta-large', device='cuda:0'):
        model = AutoModelForMaskedLM.from_pretrained(path)
        self.tokenizer = AutoTokenizer.from_pretrained(path)

        # Keep the encoder plus the dense/layer-norm part of the MLM head;
        # embeddings are read off just before the vocabulary projection.
        self.model = model.roberta
        self.dense = model.lm_head.dense
        self.layer_norm = model.lm_head.layer_norm

        self.device = torch.device(device)
        self.model = self.model.to(self.device)
        self.dense = self.dense.to(self.device)
        self.layer_norm = self.layer_norm.to(self.device)

        self.vocab = self.tokenizer.get_vocab()
        self.vocab = {self.vocab[key]: key for key in self.vocab}

    def encode(self, input_texts, instruction, n_mask):
        # The instruction acts as a question about each input text, followed
        # by n_mask [MASK] slots in which the model "answers" it.
        if type(instruction) == str:
            prompts = [instruction + self.tokenizer.mask_token * n_mask for input_text in input_texts]
        elif type(instruction) == list:
            prompts = [inst + self.tokenizer.mask_token * n_mask for inst in instruction]

        inputs = self.tokenizer(input_texts, prompts, padding=True, truncation=True, return_tensors='pt').to(self.device)

        mask = inputs.input_ids.eq(self.tokenizer.mask_token_id)

        outputs = self.model(**inputs)

        # Gather hidden states at the mask positions and transform them with
        # the MLM head (minus the final vocabulary projection).
        logits = outputs.last_hidden_state[mask]
        logits = self.layer_norm(gelu(self.dense(logits)))
        logits = logits.reshape(len(input_texts), n_mask, -1)

        # Mean-pool over the mask slots, then standardize each embedding.
        logits = logits.mean(1)
        logits = (logits - logits.mean(1, keepdim=True)) / logits.std(1, keepdim=True)

        return logits

inbedder = InBedder(path='KomeijiForce/inbedder-roberta-large', device='cpu')

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is the animal mentioned here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9374, 0.9917], grad_fn=<SumBackward1>)

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is emotion expressed here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9859, 0.8537], grad_fn=<SumBackward1>)
```
BrandonZYW/opt-350m-InBedder
BrandonZYW
2024-10-28T17:50:41Z
86
0
transformers
[ "transformers", "safetensors", "opt", "text-generation", "en", "dataset:KomeijiForce/Inbedder-Pretrain-Data", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-02-16T00:03:03Z
---
license: mit
datasets:
- KomeijiForce/Inbedder-Pretrain-Data
language:
- en
---

# [ACL2024] Answer is All You Need: Instruction-following Text Embedding via Answering the Question

InBedder🛌 is a text embedder designed to follow instructions. An instruction-following text embedder can capture the characteristics of texts specified by user instructions. InBedder offers a novel viewpoint that treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly. We show that InBedder is aware of instructions across different evaluation tasks.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64323dd503d81fa4d26deaf9/jLbqF-2uT8Aw9DsN7XCVG.png)

The following is a use case from [https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb](https://github.com/zhang-yu-wei/InBedder/blob/main/UseCase.ipynb)

```python
import torch
from torch.nn.functional import gelu, cosine_similarity
from transformers import AutoTokenizer, AutoModelForMaskedLM


class InBedder():

    def __init__(self, path='KomeijiForce/inbedder-roberta-large', device='cuda:0'):
        model = AutoModelForMaskedLM.from_pretrained(path)
        self.tokenizer = AutoTokenizer.from_pretrained(path)

        # Keep the encoder plus the dense/layer-norm part of the MLM head;
        # embeddings are read off just before the vocabulary projection.
        self.model = model.roberta
        self.dense = model.lm_head.dense
        self.layer_norm = model.lm_head.layer_norm

        self.device = torch.device(device)
        self.model = self.model.to(self.device)
        self.dense = self.dense.to(self.device)
        self.layer_norm = self.layer_norm.to(self.device)

        self.vocab = self.tokenizer.get_vocab()
        self.vocab = {self.vocab[key]: key for key in self.vocab}

    def encode(self, input_texts, instruction, n_mask):
        # The instruction acts as a question about each input text, followed
        # by n_mask [MASK] slots in which the model "answers" it.
        if type(instruction) == str:
            prompts = [instruction + self.tokenizer.mask_token * n_mask for input_text in input_texts]
        elif type(instruction) == list:
            prompts = [inst + self.tokenizer.mask_token * n_mask for inst in instruction]

        inputs = self.tokenizer(input_texts, prompts, padding=True, truncation=True, return_tensors='pt').to(self.device)

        mask = inputs.input_ids.eq(self.tokenizer.mask_token_id)

        outputs = self.model(**inputs)

        # Gather hidden states at the mask positions and transform them with
        # the MLM head (minus the final vocabulary projection).
        logits = outputs.last_hidden_state[mask]
        logits = self.layer_norm(gelu(self.dense(logits)))
        logits = logits.reshape(len(input_texts), n_mask, -1)

        # Mean-pool over the mask slots, then standardize each embedding.
        logits = logits.mean(1)
        logits = (logits - logits.mean(1, keepdim=True)) / logits.std(1, keepdim=True)

        return logits

inbedder = InBedder(path='KomeijiForce/inbedder-roberta-large', device='cpu')

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is the animal mentioned here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9374, 0.9917], grad_fn=<SumBackward1>)

texts = ["I love cat!", "I love dog!", "I dislike cat!"]
instruction = "What is emotion expressed here?"
embeddings = inbedder.encode(texts, instruction, 3)

cosine_similarity(embeddings[:1], embeddings[1:], dim=1)
# tensor([0.9859, 0.8537], grad_fn=<SumBackward1>)
```
lfcc/medlink-bi-encoder
lfcc
2024-10-28T17:45:36Z
17
1
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1540", "loss:CosineSimilarityLoss", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-10-28T17:36:36Z
--- library_name: sentence-transformers tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:1540 - loss:CosineSimilarityLoss base_model: neuralmind/bert-base-portuguese-cased metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max widget: - source_sentence: A ascite quilosa é uma manifestação rara com um amplo diagnóstico diferencial. No adulto está sobretudo associada a casos de trauma, iatrogenia, neoplasias, doença hepática crónica e infeções micobacterianas. Os autores descrevem um caso raro de ascite quilosa como forma de apresentação de pericardite constritiva. sentences: - Um derrame pleuro-pericárdico acompanhado de febre geralmente sugere uma etiologia infecciosa. Quando episódios recorrentes ocorrem, sem isolamento de agente microbiológico, deve-se suspeitar de síndrome febril periódico, sendo a Febre Mediterrânea Familiar a mais frequente deste grupo. Febre Mediterrânea Familiar é uma doença autossómica recessiva, causada por mutações no gene MEFV e caracterizada por ataques recorrentes de febre e serosite. Os primeiros sintomas geralmente manifestam-se antes dos 20 anos de idade, sendo a dor abdominal o sintoma mais frequente. Neste artigo, iremos apresentar um caso de polisserosite febril recidivante como uma apresentação incomum de Febre Mediterrânea Familiar. - A pericardite constritiva (PC) consiste num compromisso da função cardíaca diastólica causado por um pericárdio fibrótico, inflamado ou calcificado, geralmente espessado. Os autores apresentam um caso de doente com polisserosite, cuja extensa investigação diagnóstica inicial, incluindo o ecocardiograma com doppler (ED) e a tomografia axial computorizada (TAC), não permitiram esclarecer a etiologia dos derrames, tendo o doente mantido ascite refractária apesar do tratamento médico. O gradiente sero-ascítico de albumina ≥ 1,1g/dL, o valor de proteínas no líquido ascítico > 2,5g/dL, o ingurgitamento jugular, bem como os antecedentes de derrames pericárdicos, levantaram a suspeita de PC. O diagnóstico foi apoiado pelo ED e pela TAC subsequentes e confirmado por cateterismo cardíaco. Perante um doente com polisserosite, a investigação diagnóstica deve ser orientada pelo exame citoquímico dos líquidos serosos. A PC é uma causa rara de ascite recorrente e estabelecer o diagnóstico constitui um desafio, sendo necessário um elevado índice de suspeição. - A Síndrome de Felty (SF) é caracterizada pela tríade artrite reumatóide (AR), neutropenia e esplenomegalia. É uma manifestação extra-articular rara da AR, presente em menos de 3% dos doentes, sendo mais frequente em mulheres e entre a 5ª e a 7ª décadas de vida. Na maioria dos casos surge, pelo menos, 10 anos após o diagnóstico da AR e associa-se a outras manifestações extra-articulares como vasculite, serosite ou adenopatias. Descrevemos um caso de uma mulher de 69 anos que se apresenta na consulta com neutropenia grave e sem qualquer outra sintomatologia acompanhante. Da investigação etiológica apurou-se altos títulos de fator reumatóide e Anti-CCP, associados a esplenomegalia, tendo sido feito o diagnóstico de SF, como apresentação inaugural de AR. Descrevemos este caso para realçar a importância da exclusão de causa auto-imune perante um doente com neutropenia ainda que sem clínica de artrite ou sinovite. 
- source_sentence: Os autores apresentam o caso de uma doente, 38 anos, sem antecedentes, admitida para investigação de derrame pleural. Toracocentese revelou hemotórax com exames bacteriológico, micobacteriológico e anatomo-patológico negativos. TAC toraco-abdomino-pélvico sugestiva de carcinomatose peritoneal, sem identificação de neoplasia primária. Biópsia de lesão superficial a nível pélvico compatível com endometriose. Laparoscopia diagnóstica com biopsia de lesões peritoneais também compatíveis com endometriose. Perante anatomia patológica e reaparecimento do derrame com novo ciclo menstrual admitiu-se endometriose torácica, tendo iniciado terapêutica supressora hormonal com resolução da sintomatologia. Os autores apresentam o caso clínico pela raridade e desafio diagnóstico que representa. A endometriose pulmonar caracteriza-se por tecido endometrial no parenquima pulmonar ou pleura e manifesta-se por pneumotorax, hemotorax ou hemoptises cíclicas catameniais. Os exames complementares são inespecíficos e o diagnóstico de exclusão, tendo em conta a história clínica e a natureza catamenial dos sintomas. O tratamento consiste inicialmente na supressão hormonal podendo necessitar de cirurgia. sentences: - Mulher de 64 anos, com antecedentes de Síndrome de Sjögren primário, recorre ao serviço de urgência por epigastralgias, vómitos, icterícia, colúria, acolia, prurido, anorexia e perda ponderal com 2 semanas de evolução. Objetivamente com dor à palpação no hipocôndrio direito e icterícia. Ecografia abdominal com dilatação das vias biliares intra e extra-hepáticas e tomografia computorizada e ressonância magnética com globosidade da área cefálica do pâncreas, lesões nodulares renais bilaterais, heterogeneidade do útero, nódulo da supra-renal e micronódulos pulmonares. Foi realizada biopsia renal guiada por TC que revelou linfoma não Hogdkin difuso de células B com elevado índice proliferativo. Estudo complementado por ecoendoscopia e CPRE confirmou envolvimento duodenal e papilar, condicionando estenose do terço distal da via biliar principal. Apresentamos este caso pela forma de apresentação rara com icterícia obstrutiva em doente com linfoma multifocal, de envolvimento extranodal exclusivo. O diagnóstico precoce e estadiamento célere são fatores determinantes no prognóstico. - Os autores apresentam o caso de uma paciente com síndrome de Klippel-Trenaunay, um síndrome neurocutâneo raro, de etiologia não esclarecida, que se caracteriza pela tríade clínica de hemangiomas cutâneos, insuficiência venosa e hipertrofia dos tecidos moles. A dor é o sintoma mais frequente relacionada com a insuficiência venosa crónica do membro afectado , mas poderão surgir complicações decorrentes da hipertrofia óssea e do aparecimento de malformações vasculares noutros locais. - Numerosas terapêuticas foram propostas na síndrome de secreção inadequada de hormona antidiurética (SIADH) refractária à restrição hídrica e dieta hipersalina, existindo raros casos descritos de SIADH de origem neurológica em que foi conseguido um controlo a longo prazo com fenitoína. Um homem de 48 anos, raça caucasiana, com antecedentes de etilismo crónico e história recente de traumatismo craniano com fractura do rochedo temporal direito é encaminhado ao Serviço de Urgência(SU) por crise convulsiva não presenciada e quadro confusional. Ao exame objectivo, o doente apresentava-se prostrado, desorientado e com períodos de agitação, sem sinais de depleção de volume. 
O restante exame físico e neurológico não revelou alterações relevantes. À admissão destacavam-se, analiticamente, níveis séricos de sódio de 120 mEq/l e, imagiologicamente, a tomografia crânio-encefálica revelou-se sobreponível a estudos anteriores. Outros exames complementares realizados, no SU, não mostraram alterações. Durante o internamento a abordagem diagnóstica permitiu o diagnóstico de SIADH, como complicação de uma fractura da base do crânio. Apesar da instituição de restrição hídrica e dieta hipersalina, o doente manteve o quadro confusional e hiponatrémia refractários. Face à etiologia da SIADH iniciou-se terapêutica com fenitoína conseguindo-se uma melhoria mantida do quadro clínico e atingimento de níveis normonatrémicos. - source_sentence: A hiponatremia é a alteração eletrolítica mais frequente na prática clínica hospitalar. Sendo muitas vezes devido a perdas ou iatrogenia farmacológica. A insuficiência primária da supra-renal é uma causa rara deste distúrbio e está muitas vezes relacionada com destruição auto-imune da glândula. Esta cursa, na maioria das vezes, com sintomas inespecíficos e de desenvolvimento insidioso. Por vezes os doentes não apresentam a tríade clássica de hipotensão, hiponatrémia e hiperpigmentação o que torna difícil o seu diagnóstico precoce. O diagnóstico correto e atempado permite oferecer ao doente um tratamento simples e crucial para a sua sobrevivência sentences: - Homem de 67 anos, internado no Serviço de Medicina por Pneumonia. Antecedentes de miocardiopatia dilatada, fibrilhação auricular, hipertensão arterial, alcoolismo crónico (80g/dia) e caquexia. No decurso do internamento desenvolveu um quadro de diminuição da força muscular de forma progressiva com tetraparésia grave, atrofia muscular de predomínio esquerdo, espasticidade e hiperreflexia dos membros inferiores. Analiticamente apresentava elevação dos parâmetros de colestase hepática, ionograma seriado com hiponatrémia discreta 132-135mEq/L, potássio, cloro, cálcio, fósforo e magnésio normais. Sem défice de vitamina B12 ou ácido fólico. Tomografia Computorizada Crânio-Encefálica sem alterações de natureza vascular ou expansiva. Punção lombar com análise do líquido cefalorraquídeo sem alterações. Serologias virais e bacterianas negativas. Eletromiograma sem lesão nervosa periférica. Foi então pedida Ressonância Magnética Crânio-Encefálica e Cervical para exclusão de lesão desmielinizante cervical alta ou do tronco cerebral, tendo-se verificado hipersinal em T2 a nível da ponte característica da Mielinólise Central Pontina. - A Doença de Still é uma doença auto-inflamatória rara, sendo um dos diagnósticos diferenciais de febre de origem indeterminada. A apresentação típica inclui febre, rash evanescente e artrite acompanhada de valores desproporcionalmente elevados de ferritina. Apresentamos um caso de diagnóstico particularmente difícil numa mulher de 44 anos com envolvimento cutâneo, articular e pulmonar, na qual os valores de ferritina estavam apenas moderadamente elevados, mas a sua forma glicosilada significativamente reduzida. No decorrer da investigação foi identificada doença celíaca concomitante, com défice de ferro profundo, que apontou para uma possível alteração no mecanismo de produção de ferritina na presença de um estímulo inflamatório. Este caso sublinha a relevância da ferritina glicosilada como marcador mais fiável na investigação de casos onde a Doença de Still é suspeita. - Resumo Os linfomas que envolvem o colo do útero são muito raros. 
Relatamos o caso de uma mulher de 71 anos apresentando sintomas de diverticulite, com vários achados imagiológicos incidentais sugerindo uma doença linfoproliferativa e uma grande massa no colo do útero. A biópsia profunda do colo do útero diagnosticou um linfoma difuso de grandes células B envolvendo o colo do útero, provável transformação de um linfoma de zona marginal. A doente está atualmente em tratamento com rituximab, ciclofosfamida, doxorrubicina, vincristina e predisolona e metotrexato em altas doses para profilaxia de envolvimento do sistema nervoso central. Para diagnosticar com precisão um linfoma não-Hodgkin do colo do útero, a equipa médica deve estar atenta a esta hipótese diagnóstica clínica, a fim de proporcionar as melhores condições para a investigação, como biópsia profunda do colo do útero e estudos histológicos e imuno-histoquímicos da amostra. - source_sentence: A Arterite de Takayasu é uma doença inflamatória crónica dos grandes vasos, que envolve a artéria aorta e os seus ramos principais, e afecta predominantemente mulheres com idade inferior a 40 anos. A clínica é inespecífica e varia com o local anatómico envolvido, pelo que é necessário um elevado índice de suspeição clínica para que seja realizado o seu diagnóstico. O acidente vascular cerebral tem uma prevalência de cerca de 10 a 20% no decurso da doença e influencia de forma negativa o seu prognóstico. O acidente vascular cerebral hemorrágico como manifestação da Arterite de Takayasu é raro. Apresentamos o caso de uma doente jovem que se apresenta com uma hemorragia cerebral, cuja investigação etiológica culminou no diagnóstico de Arterite de Takayasu. A importância desde caso clínico prende-se com a escassez de casos publicados na literatura, uma vez que retrata uma patologia rara, com uma apresentação inicial invulgar. sentences: - Resumo Aproximadamente 5%-10% dos acidentes vasculares cerebrais (AVC) criptogénicos têm uma neoplasia subjacente. A parésia do nervo abducente em doentes com neoplasia encontra-se geralmente relacionada com compressão tumoral, hipertensão intracraniana ou metastização. Os autores reportam um caso de um doente com 65 anoscom AVC multiterritório que se apresentou com uma parésia do sexto nervo unilateral e isolada cuja etiologia foi extensamente estudada. Admitiu-se o diagnóstico final de síndrome paraneoplásico, que foi a apresentação inicial de um carcinoma gástrico oculto provavelmente relacionado com a hipercoagulabilidade associada à malignidade. Este caso enfatiza a importância de considerar um estudoadicional em casos selecionados de AVC criptogénico ou parésia do abducente. - As encefalites virais são entidades raras, mas que, pelas suas implicações diagnósticas, terapêuticas e prognósticas, não podem deixar de ser consideradas em qualquer doente que se apresente com sintomas psiquiátricos, alteração do estado de consciência, convulsões ou coma sem causa evidente. O presente caso diz respeito a um doente com sintomas psicóticos e um estado confusional com duas semanas de evolução. À admissão, apresentava-se subfebril, com flutuação do nível de consciência. O estudo analítico e TAC crânio-encefálica não mostraram alterações de relevo, tendo realizado punção lombar cujo exame citoquímico e exame bacteriológico se mostravam igualmente inalterados. Por suspeita mantida de encefalite viral e não sendo possível excluir causa herpética, foi iniciada terapêutica empírica com aciclovir. 
A PCR do vírus Epstein-Barr (EBV) no líquor foi positiva, permitindo assim o diagnóstico raro de uma encefalite a EBV num doente idoso e imunocompetente, tendo-se verificado resolução completa do quadro clínico. - A abordagem da febre é sem dúvida uma das artes da Medicina. A doença de Still no adulto (DSA) é uma patologia inflamatória sistémica de baixa incidência e etiologia desconhecida. Pela inespecificidade clínica e laboratorial, é um diagnóstico de exclusão. Os autores descrevem o caso de homem de 32 anos com a tríade de febre, oligoartralgia e exantema cutâneo evanescente, cuja marcha diagnóstica minuciosa culminou no diagnóstico de DSA, apresentando hiperferritinémia sérica dez vezes superior ao normal. Relembra-se a importância da DSA como causa de síndrome febril arrastado, cujo diagnóstico, atendendo à ausência de marcadores patognomónicos, pode passar despercebido. - source_sentence: A síndrome da Secreção Inapropriada da Hormona Antidiurética (SIADH) é uma das causas de hiponatremia euvolémica. A hidrocefalia de pressão normal (HPN) pode ser uma causa neurológica para SIADH e o seu diagnóstico e correção são fundamentais para a normalização dos níveis de sódio. Relatamos o caso de uma mulher de 67 anos, com hiponatremia crónica, marcha de base alargada, urgência miccional e sensação de perda de memória, sem evidência de sobrecarga hídrica ou desidratação. O estudo complementar revelou osmolaridade sérica normal, osmolaridade urinária elevada, sódio urinário elevado. Após restrição hídrica, houve melhoria da hiponatremia. Imagiologicamente documentou-se presença de membrana aqueductal causando obstrução ao fluxo do líquido cefalorraquidiano. O diagnóstico de SIADH em contexto de HPN foi presumido. Após correção cirúrgica houve resolução completa da hiponatremia. Hoje sabe-se que existem formas secundárias raras de HPN, sendo estas causadas por estenose ou obstrução aqueductal, como relatado no caso apresentado. sentences: - Define-se lesão hepática induzida por um fármaco como uma lesão hepática que, após exclusão de outras potenciais etiologias, se assume como secundária a um fármaco, produto de ervanária ou xenobiótico, e que resulta em alterações da enzimologia hepática ou disfunção hepática clinicamente evidente. Os autores descrevem o caso de um homem de 87 anos internado para estudo etiológico de uma lesão hepática de padrão colestático. Após estudo alargado, foi colocada como hipótese etiológica mais provável uma iatrogenia farmacológica, posteriormente corroborada por biópsia hepática, sendo a Espironolactona assumida como o agente causal mais provável, atendendo ao quadro clínico e aos achados histopatológicos. Estão descritos alguns casos de lesão hepática induzida pela Espironolactona, quando usada em doses de 50 e 100 mg/dia. Os autores relatam um caso raro que ocorreu num doente que se encontrava sob Espironolactona na dose de 25 mg/dia. - Resumo A ceftriaxona, um dos antibióticos mais frequentementeutilizados na prática clínica, tem como efeito adverso, raro epotencialmente grave, a agranulocitose. Reportamos um caso de uma mulher de 85 anos em esquema terapêutico prolongado com ceftriaxona para endocardite por Streptococcus bovis, que desenvolve agranulocitose ao 25º dia de antibioterapia, com nadir de contagem absoluta de neutrófilos de 0/uL. Outras causas potenciais foram excluídas. 
A terapêutica antibiótica foi alterada para amoxicilina/ácido clavulânico e realizou ciclo de fator estimulador de colónias de granulócitos, com resolução da neutropenia após 3 dias. Queremos destacar este efeito adverso raro com o uso prolongado da ceftriaxona,salientando a necessidade de monitorização regulardas contagens de leucócitos. O tratamento desta condiçãopassa pela suspensão do agente causal e o uso transitório de factor estimulador de colónias de granulócitos até resolução da neutropenia.
  - A síndrome de secreção inapropriada da hormona anti-diurética (SIADH) é uma causa frequente de hiponatrémia, sendo um diagnóstico de exclusão. Quando associada à infeção pelo vírus varicella zoster é mais frequente na sua forma disseminada. Os autores descrevem o caso de uma mulher de 83 anos, com quadro com 7 dias de evolução de síndrome confusional flutuante, desorientação temporo-espacial e tonturas. Medicada com brivudina, aciclovir tópico e ofloxacina gotas para tratamento de herpes zóster com atingimento dos ramos oftálmico e mandibular do nervo trigémeo. À admissão, com hiponatrémia de 128mmol/L. Excluídas outras causas, assumiu-se o diagnóstico de SIADH associado a infeção por herpes. O caso descrito sugere uma relação causal entre a reactivação por VZV e a SIADH sintomática. A favor, temos a resolução completa da hiponatrémia a acompanhar a melhoria clínica. O presente caso torna-se importante por se tratar de uma entidade rara, pouco conhecida e subdiagnosticada, mas com efeitos clínicos importantes.
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on neuralmind/bert-base-portuguese-cased
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: pearson_cosine
      value: 0.6875234896564695
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.6855542083017127
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.6475708379913874
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.6531511386527615
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.6497495499262932
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.6545105043371998
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.6790094551137061
      name: Pearson Dot
    - type: spearman_dot
      value: 0.6847710424836908
      name: Spearman Dot
    - type: pearson_max
      value: 0.6875234896564695
      name: Pearson Max
    - type: spearman_max
      value: 0.6855542083017127
      name: Spearman Max
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: sts test
      type: sts-test
    metrics:
    - type: pearson_cosine
      value: 0.6907882980083289
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.6894513736041122
      name: Spearman Cosine
    - type: pearson_manhattan
      value: 0.6492706768297136
      name: Pearson Manhattan
    - type: spearman_manhattan
      value: 0.6546984498682096
      name: Spearman Manhattan
    - type: pearson_euclidean
      value: 0.651318699091458
      name: Pearson Euclidean
    - type: spearman_euclidean
      value: 0.6544106471290732
      name: Spearman Euclidean
    - type: pearson_dot
      value: 0.6817298567055641
      name: Pearson Dot
    - type: spearman_dot
      value: 0.6881836625714188
      name: Spearman Dot
    - type: pearson_max
      value: 0.6907882980083289
      name: Pearson Max
    - type: spearman_max
      value: 0.6894513736041122
      name: Spearman Max
---

# SentenceTransformer based on neuralmind/bert-base-portuguese-cased

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) <!-- at revision 94d69c95f98f7d5b2a8700c420230ae10def0baa -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lfcc/medlink-bi-encoder")
# Run inference
sentences = [
    'A síndrome da Secreção Inapropriada da Hormona Antidiurética (SIADH) é uma das causas de hiponatremia euvolémica. A hidrocefalia de pressão normal (HPN) pode ser uma causa neurológica para SIADH e o seu diagnóstico e correção são fundamentais para a normalização dos níveis de sódio. Relatamos o caso de uma mulher de 67 anos, com hiponatremia crónica, marcha de base alargada, urgência miccional e sensação de perda de memória, sem evidência de sobrecarga hídrica ou desidratação. O estudo complementar revelou osmolaridade sérica normal, osmolaridade urinária elevada, sódio urinário elevado. Após restrição hídrica, houve melhoria da hiponatremia. Imagiologicamente documentou-se presença de membrana aqueductal causando obstrução ao fluxo do líquido cefalorraquidiano. O diagnóstico de SIADH em contexto de HPN foi presumido. Após correção cirúrgica houve resolução completa da hiponatremia. Hoje sabe-se que existem formas secundárias raras de HPN, sendo estas causadas por estenose ou obstrução aqueductal, como relatado no caso apresentado.',
    'A síndrome de secreção inapropriada da hormona anti-diurética (SIADH) é uma causa frequente de hiponatrémia, sendo um diagnóstico de exclusão. Quando associada à infeção pelo vírus varicella zoster é mais frequente na sua forma disseminada. Os autores descrevem o caso de uma mulher de 83 anos, com quadro com 7 dias de evolução de síndrome confusional flutuante, desorientação temporo-espacial e tonturas. Medicada com brivudina, aciclovir tópico e ofloxacina gotas para tratamento de herpes zóster com atingimento dos ramos oftálmico e mandibular do nervo trigémeo. À admissão, com hiponatrémia de 128mmol/L. Excluídas outras causas, assumiu-se o diagnóstico de SIADH associado a infeção por herpes. O caso descrito sugere uma relação causal entre a reactivação por VZV e a SIADH sintomática. A favor, temos a resolução completa da hiponatrémia a acompanhar a melhoria clínica. O presente caso torna-se importante por se tratar de uma entidade rara, pouco conhecida e subdiagnosticada, mas com efeitos clínicos importantes.',
    'Resumo A ceftriaxona, um dos antibióticos mais frequentementeutilizados na prática clínica, tem como efeito adverso, raro epotencialmente grave, a agranulocitose. Reportamos um caso de uma mulher de 85 anos em esquema terapêutico prolongado com ceftriaxona para endocardite por Streptococcus bovis, que desenvolve agranulocitose ao 25º dia de antibioterapia, com nadir de contagem absoluta de neutrófilos de 0/uL. Outras causas potenciais foram excluídas. A terapêutica antibiótica foi alterada para amoxicilina/ácido clavulânico e realizou ciclo de fator estimulador de colónias de granulócitos, com resolução da neutropenia após 3 dias. Queremos destacar este efeito adverso raro com o uso prolongado da ceftriaxona,salientando a necessidade de monitorização regulardas contagens de leucócitos. O tratamento desta condiçãopassa pela suspensão do agente causal e o uso transitório de factor estimulador de colónias de granulócitos até resolução da neutropenia.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

<!-- ### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!-- ### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!-- ### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Semantic Similarity

* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6875     |
| **spearman_cosine** | **0.6856** |
| pearson_manhattan   | 0.6476     |
| spearman_manhattan  | 0.6532     |
| pearson_euclidean   | 0.6497     |
| spearman_euclidean  | 0.6545     |
| pearson_dot         | 0.679      |
| spearman_dot        | 0.6848     |
| pearson_max         | 0.6875     |
| spearman_max        | 0.6856     |

#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.6908     |
| **spearman_cosine** | **0.6895** |
| pearson_manhattan   | 0.6493     |
| spearman_manhattan  | 0.6547     |
| pearson_euclidean   | 0.6513     |
| spearman_euclidean  | 0.6544     |
| pearson_dot         | 0.6817     |
| spearman_dot        | 0.6882     |
| pearson_max         | 0.6908     |
| spearman_max        | 0.6895     |

<!-- ## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!-- ### Recommendations

*What are recommendations with respect to the foreseeable issues?
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### spmi_dataset * Size: 1,540 training samples * Columns: <code>abstract1</code>, <code>abstract2</code>, and <code>score</code> * Approximate statistics based on the first 1000 samples: | | abstract1 | abstract2 | score | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 8 tokens</li><li>mean: 189.72 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 211.52 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> | * Samples: | abstract1 | abstract2 | score | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>A dissecção aórtica aguda é uma emergência cardiovascular potencialmente fatal. É necessário um elevado grau de suspeição clínica para o seu diagnóstico, pois apresenta sintomas inespecíficos e mimetiza outras patologias. 
A maioria dos doentes tem dor torácica severa, com irradiação posterior e início abrupto, porém alguns são assintomáticos ou têm apresentações atípicas (cerca de 10%), que levam a diagnósticos tardios e a um pior prognóstico. A taxa de mortalidade é elevada, sendo superior a 50% se não for tratada. Apresenta-se o caso de um homem de 43 anos, admitido no serviço de urgência por dispneia de início súbito, sem dor torácica, uma apresentação rara de dissecção aórtica, com o objetivo de alertar para os fatores de risco e alterações do exame físico e nos exames auxiliares de diagnóstico da avaliação inicial que podem levantar a suspeita clínica e o diagnóstico precoce.</code> | <code>Resumo O baço possui funções imunológicas e hematológicas importantes. A esplenectomia está indicada na esferocitose hereditária, doença em que os eritrócitos são destruídos no baço por defeitos estruturais. Doentes esplenectomizados apresentam risco aumentado de infeção e de infeção fulminante pós-esplenectomia, que se caracteriza por um quadro inicial de febre, mialgias, cefaleia e vómitos. As bactérias Capnocytophaga colonizam a mucosa oral, podendo causar infeções oportunistas em doentes esplenectomizados. Os autores identificam o caso de um doente de 38 anos, esplenectomizado, que recorreu ao Serviço de Urgência por febre, vómitos e mialgias. As hemoculturas mostraram o crescimento de Capnocytophaga spp. Apesar das medidas instituídas, o doente evoluiu rapidamente para choque séptico, culminando na sua morte. Os autores pretendem alertar para esta condição rara associada a alta mortalidade, com o objetivo de aumentar a sobrevivência destes doentes, através da identificação e intervenção imediatas.</code> | <code>0.0</code> | | <code>A complexidade das doenças auto-imunes, caracterizadas por uma marcada heterogeneidade fenotípica e imunológica, tem o seu paradigma na sobreposição de perfis de auto-anticorpos e de manifestações clínicas de diferentes doenças num mesmo indivíduo. Os autores descrevem o caso de uma doente que, ao longo de doze anos de evolução de doença, cumpre critérios de classificação de quatro doenças auto-imunes diferentes, nomeadamente, Lúpus Eritematoso Sistémico, Esclerose Sistémica, Síndrome de Sjogrën e Colangite Biliar Primária. A sobreposição de perfis de auto-anticorpos, bem como de distintos fenótipos de diferentes doenças representam um desafio no diagnóstico, seguimento e tratamento destes doentes.</code> | <code>A esclerose sistémica (ES) é uma doença autoimune que pode afetar qualquer faixa etária, sendo pouco frequente após os 65 anos. O início da doença em idade geriátrica apresenta um fenótipo com diferentes aspetos quanto às manifestações clinicas, envolvimento orgânico e prognóstico. Descrevemos um caso clínico invulgar de uma doente com diagnóstico de ES estabelecido aos 87 anos, apresentando como manifestação inicial poliartralgias inflamatórias das mãos. O diagnóstico nesta faixa etária é particularmente desafiador, tendo sido estabelecido clinicamente e complementado com o resultado da capilaroscopia, apesar da doente apresentar auto-anticorpos específicos negativos. A doente realizou estudo do envolvimento visceral baseado em sintomas. Apesar da literatura descrever maior envolvimento orgânico na ES de inicio em idade avançada, a nossa doente não demonstrou marcado compromisso orgânico. 
A multidisciplinaridade envolvendo a Medicina Interna, a Reumatologia e a Fisiatria permitiram elaborar um plano terapêutico adequado, apresentando evolução clínica e funcional favorável.</code> | <code>0.65</code> | | <code>As enteropatias perdedoras de proteínas (EPP) caracterizam-se por uma perda proteica excessiva a nível do trato digestivo, podendo condicionar hipoproteinémia, edemas, bem como uma predisposição aumentada a infeções.1 As causas mais frequentes são a obstrução linfática, patologias gástricas, intestinais ou cardíacas. Neste caso clínico é descrito uma etiologia incomum de EPP, a pericardite constritiva (PC).2 Trata-se de um homem de 54 anos, com múltiplos internamentos por edemas generalizados e erisipelas de repetição, cuja investigação etiológica revelou uma EPP, causada por PC.</code> | <code>Resumo A enteropatia perdedora de proteínas (EPP) caracteriza-se pela presença de edema generalizado e hipoalbuminemiagrave, secundários à perda proteica através do trato gastrointestinal. Os autores reportam um caso de enteropatia perdedora de proteínas secundária a lupus eritematoso sistémico (LES), como a manifestação inicial desta doença. A doente relatava um quadro pautado por 4 meses de diarreia aquosa, não sanguinolenta, (com um máximo de 10 dejeções diárias), e perda ponderal significativa. Posteriormente desenvolveu marcado edema periférico e rash cutâneo malar e maculopapular ao nível do tórax e membros. Analiticamente apresentava anemia, hipoalbuminemia grave, hipocaliémia e hipomagnesémia. No decurso da investigação foram excluídas proteinúria eoutras causas de hipoalbuminemia. Após resultados como a pesquisa de anticorpos anti-nucleares e anti-ribonucleoproteinas positiva foi assumido o diagnóstico de EPP secundária ao LES. A doente foi tratada com pulsos de Metilprednisolona 1000 mg/dia durante 3 dias, seguido de prednisolona 1 mg/kg/dia, com boa resposta clínica. Após 20 dias, foi adicionada Azatioprina e iniciado o desmame de corticoides. 
O presente caso clínico destaca uma EPP como forma deapresentação do LES, cujo diagnóstico pode passar despercebido, tendo em conta a sua raridade, e acarretar um aumento da morbilidade e mortalidade.</code> | <code>0.65</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Evaluation Dataset #### spmi_dataset * Size: 386 evaluation samples * Columns: <code>abstract1</code>, <code>abstract2</code>, and <code>score</code> * Approximate statistics based on the first 386 samples: | | abstract1 | abstract2 | score | |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 9 tokens</li><li>mean: 193.97 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 203.56 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 0.95</li></ul> | * Samples: | abstract1 | abstract2 | score | |:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------| | <code>Resumo A síndrome de lise tumoral é a uma emergência médica potencialmente fatal decorrente da lise celular maciça que ocorre em neoplasias malignas com grande carga tumoral. Ocorre sobretudo em neoplasias hematológicas sob quimioterapia, sendo menos frequente em tumores sólidos, os quais apresentam geralmente um menor índice proliferativo. A síndrome de lise tumoral no carcinoma hepatocelular tratado com sorafenib, um inibidor oral multicinase, é extremamente rara, descrevendo-se apenas nove casos na literatura. Tanto quanto sabemos, não existem casos descritos na população europeia. Apresentamos um caso de síndrome de lise tumoral num doente com carcinoma hepatocelular multifocal sob tratamento com sorafenib e infeção SARS-CoV-2.</code> | <code>Resumo A púrpura trombocitopénica imune (PTI) é uma condição autoimune na qual anticorpos patogénicos se ligam às plaquetas, acelerando sua eliminação da circulação. Este caso é sobre uma mulher de 65 anos com fadiga, mialgias e púrpura cutânea localizada nas pernas, com início de sinais e sintomas 2 dias após vacinação com vacina SARS-CoV-2 da Moderna®. Um mês antes, a contagem de plaquetas era de 157x10^9/L. À admissão, a contagem de plaquetas era de 5x10^9/L, com trombocitopénia grave confirmada em esfregaço de sangue periférico. Recebeu prednisolona 1 mg/kg/dia. Após 7 dias, a contagem de plaquetas era de 45x10^9/L com resolução dos sintomas. Estudo de autoimunidade, hormonas tiroideias, coagulação, eletroforese de proteínas e testes sorológicos foram normais. Considerou-se provável relação causa-efeito da vacinação e aparecimento da clínica. O INFARMED considerou provável a relação com a vacina Moderna®, tratando-se do primeiro caso em Portugal.</code> | <code>0.85</code> | | <code>A cetoacidose diabética euglicemica (CADEu) é uma complicação potencialmente fatal da diabetes mellitus (DM), associada à medicação com inibidores do cotransportador sódio-glucose 2 (iSGLT2). Pode ser difícil de identificar devido à ausência de hiperglicemia. Homem com DM tipo 2, 71 anos, medicado com empagliflozina recorreu ao serviço de urgência por mal-estar geral e anúria. Estava prostrado, confuso, hipotenso, com respiração de Kussmaul. Analiticamente apresentou leucocitose, PCR de 202mg/dl, acidose metabólica grave com aumento do hiato aniónico, glicémia de 141 mg/dL e leucocitúria. Estes resultados poderiam ter sido interpretados no contexto infecioso urinário grave. Após consideração dos antecedentes medicamentosos e achados clínicos foi verificada uma cetonemia indoseavelmente alta que estabeleceu o diagnóstico de CADEu e permitiu início do tratamento dirigido com resolução da clínica. 
Os doentes medicados com iSGLT2 com doença aguda devem beneficiar de gasimetria arterial e medição da cetonemia de forma a garantir um diagnóstico precoce e tratamento atempado.</code> | <code>A sarcoidose é uma doença inflamatória sistémica caracterizada pela formação de granulomas não caseosos. Múltiplas podem ser as suas formas de manifestação clínica, sendo a síndroma de Heerfort-Waldenstrom uma forma de manifestação rara, encontrada em apenas 0.3% dos casos e caracterizada pelo aparecimento de parésia facial, tumefação parotídea, uveíte anterior e febre. Por vezes cursa com formas incompletas como no caso que descrevemos de uma mulher de 50 anos, sem antecedentes patológicos de relevo, que se apresenta com parésia e hipostesia da hemiface esquerda e disfagia para sólidos, tendo sido diagnosticada uma parésia facial periférica esquerda com exclusão imagiológica de evento neurológico vascular agudo. Foi medicada com deflazacorte e brivudina com melhoria da sintomatologia. Após término da corticoterapia retoma o quadro de disfagia, agora para sólidos e líquidos, parésia e hipostesia da hemiface direita com documentação ao exame objectivo de parésia facial periférica direita e hipertrofia parotídea bilateral. Analiticamente apresentava elevação sérica da enzima de conversão da angiotensina de 72.5U/L. A ressonância magnética cerebral demonstrava pequenas áreas de hipersinal em T2 na substância branca subcortical frontal, parietal direita, temporal esquerda e na transição caloso septal à esquerda, com líquor sem alterações citoquímicas. A TC toracoabdominopélvica mostrava múltiplas adenomegalias mediastínicas e hilares. A biópsia de um gânglio retro-auricular com retalhos de glândula salivar (parótida) evidenciava um processo inflamatório granulomatoso sem necrose caseosa, com imunofenotipagem sem alterações. O lavado broncoalveolar revelou linfocitose intensa e relação CD4/CD8 elevada (9.4). Foi iniciada corticoterapia e fisioterapia com melhoria da parésia facial e da clínica orofaríngea, sem recorrência. Relatamos assim um caso de neurosarcoidose sob a forma incompleta, pela ausência de atingimento ocular, de síndroma de Heefort-Waldenstrom.</code> | <code>0.0</code> | | <code>A hipertrofia ventricular esquerda no adulto, achado frequente e muitas vezes fortuito, pode dever-se a condições de sobrecarga de pressão ventricular, hipertrofia dos miócitos de causa genética ou acumulação patológica de substâncias intra ou extra-celulares. As implicações terapêuticas e prognósticas das várias etiologias são muito distintas pelo que se torna essencial a busca do diagnóstico específico. Apresenta-se um caso de hipertrofia ventricular esquerda assintomática que após uma marcha diagnóstica sistemática se revelou como miocardiopatia hipertrófica sarcomérica de início tardio. Por vários dos exames complementares de diagnóstico terem sido equívocos ou inconclusivos, é um caso demonstrativo de que, por vezes, só a abordagem completa e exaustiva permite chegar ao diagnóstico definitivo. Partindo de um exemplo real e tendo por base as recomendações da Sociedade Europeia de Cardiologia, esquematizou-se uma abordagem diagnóstica faseada desta patologia.</code> | <code>A síndrome Mounier-Kuhn é uma doença rara, caracterizada pela dilatação marcada da traqueia e brônquios, sem etiologia completamente esclarecida. 
Descrevemos o caso clínico de um homem de 48 anos de idade, com história prévia de infeções respiratórias de repetição de longa data, admitido no serviço de urgência com clínica compatível com nova infeção respiratória e elevação de parâmetros inflamatórios. A tomografia computorizada revelou achados sugestivos da síndrome em questão. O diagnóstico da Síndrome Mounier-Kuhn passa frequentemente despercebido sendo muitas vezes confundido com outras entidades. O seu diagnóstico é com frequência acidental e os exames radiológicos assumem um papel indispensável. O tratamento desta entidade é essencialmente de suporte.</code> | <code>0.0</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `num_train_epochs`: 10 - `warmup_ratio`: 0.1 - `fp16`: True - `load_best_model_at_end`: True #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 8 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: True - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: True - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False 
- `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional </details> ### Training Logs | Epoch | Step | Training Loss | Validation Loss | spearman_cosine | sts-test_spearman_cosine | |:----------:|:--------:|:-------------:|:---------------:|:---------------:|:------------------------:| | 0.5181 | 100 | 0.1677 | 0.1109 | 0.3495 | - | | 1.0363 | 200 | 0.0986 | 0.1124 | 0.3727 | - | | 1.5544 | 300 | 0.0742 | 0.1074 | 0.4131 | - | | 2.0725 | 400 | 0.068 | 0.0850 | 0.5223 | - | | 2.5907 | 500 | 0.0411 | 0.0816 | 0.5471 | - | | 3.1088 | 600 | 0.035 | 0.0766 | 0.5903 | - | | 3.6269 | 700 | 0.0197 | 0.0675 | 0.6320 | - | | 4.1451 | 800 | 0.0214 | 0.0697 | 0.6253 | - | | 4.6632 | 900 | 0.0117 | 0.0668 | 0.6467 | - | | 5.1813 | 1000 | 0.0101 | 0.0655 | 0.6491 | - | | 5.6995 | 1100 | 0.0066 | 0.0604 | 0.6800 | - | | 6.2176 | 1200 | 0.0057 | 0.0605 | 0.6776 | - | | 6.7358 | 1300 | 0.0037 | 0.0606 | 0.6765 | - | | 7.2539 | 1400 | 0.003 | 0.0603 | 0.6760 | - | | 7.7720 | 1500 | 0.0027 | 0.0587 | 0.6872 | - | | 8.2902 | 1600 | 0.0019 | 0.0588 | 0.6862 | - | | **8.8083** | **1700** | **0.0018** | **0.0584** | **0.6895** | **-** | | 9.3264 | 1800 | 0.0016 | 0.0587 | 0.6871 | - | | 9.8446 | 1900 | 0.0014 | 0.0589 | 0.6856 | - | | 10.0 | 1930 | - | - | - | 0.6895 | * The bold row denotes the saved checkpoint. <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
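Because this model is trained with `CosineSimilarityLoss`, the similarity of two abstracts at inference time is simply the cosine of their embeddings. Below is a minimal, hedged sketch using the standard sentence-transformers API; the model path is a placeholder for this repo's id (which appears earlier in the record), and the input strings are illustrative:

```python
# Hypothetical usage sketch for a SentenceTransformer trained with
# CosineSimilarityLoss on (abstract1, abstract2, score) pairs.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("path/to/this-model")  # placeholder repo id

abstract1 = "Resumo: primeiro caso clínico..."   # illustrative input
abstract2 = "Resumo: segundo caso clínico..."    # illustrative input

# Embed both abstracts and take their cosine similarity, mirroring the
# 0.0-0.95 similarity scores used as training targets above.
emb = model.encode([abstract1, abstract2], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())
```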
Viscoke/Big1
Viscoke
2024-10-28T17:42:48Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T17:32:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
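Because the getting-started section above is empty, here is a minimal, hedged sketch based only on the record's metadata (a llama-architecture `text-generation` model loadable with transformers). The prompt and generation settings are illustrative defaults, not documented values:

```python
# Hedged sketch: generic causal-LM generation. Nothing model-specific
# (prompt format, context length, chat template) is documented in the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Viscoke/Big1"  # from the record metadata above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```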
imhereforthememes/t5-small-fine-tuned_model_3
imhereforthememes
2024-10-28T17:34:15Z
115
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-27T18:45:30Z
--- library_name: transformers license: apache-2.0 base_model: t5-small tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-small-fine-tuned_model_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-fine-tuned_model_3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1560 - Rouge1: 75.4228 - Rouge2: 70.7071 - Rougel: 74.0159 - Rougelsum: 74.2555 - Gen Len: 396.1667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 0.7692 | 10 | 1.7251 | 13.6123 | 5.7258 | 13.1801 | 13.1787 | 1027.0 | | No log | 1.5385 | 20 | 1.5442 | 23.3131 | 17.9737 | 23.3131 | 23.3131 | 1027.0 | | No log | 2.3077 | 30 | 1.3803 | 12.2977 | 5.1904 | 11.7431 | 11.5969 | 1027.0 | | No log | 3.0769 | 40 | 1.2344 | 13.8993 | 11.771 | 13.8993 | 14.0091 | 1027.0 | | No log | 3.8462 | 50 | 1.1516 | 13.9042 | 11.8938 | 13.9042 | 14.0103 | 1027.0 | | No log | 4.6154 | 60 | 0.9481 | 14.3687 | 10.4481 | 13.1919 | 13.0214 | 876.6667 | | No log | 5.3846 | 70 | 0.9286 | 27.0525 | 14.6747 | 24.9583 | 24.3756 | 857.5 | | No log | 6.1538 | 80 | 0.8804 | 21.6353 | 12.8534 | 18.4791 | 18.4306 | 877.1667 | | No log | 6.9231 | 90 | 0.7841 | 47.5579 | 30.341 | 42.5411 | 42.7482 | 550.5 | | No log | 7.6923 | 100 | 0.7793 | 35.2203 | 25.009 | 31.3145 | 30.6642 | 859.8333 | | No log | 8.4615 | 110 | 0.6860 | 37.2436 | 29.1438 | 33.1425 | 32.729 | 859.6667 | | No log | 9.2308 | 120 | 0.7150 | 29.122 | 23.7579 | 27.5853 | 26.5771 | 859.6667 | | No log | 10.0 | 130 | 0.6579 | 51.6814 | 37.1169 | 47.9067 | 47.9272 | 530.1667 | | No log | 10.7692 | 140 | 0.6267 | 37.5717 | 28.0617 | 32.827 | 32.592 | 860.5 | | No log | 11.5385 | 150 | 0.6118 | 62.1203 | 49.9121 | 55.4072 | 54.9256 | 564.0 | | No log | 12.3077 | 160 | 0.5481 | 61.2435 | 49.738 | 55.7893 | 55.6371 | 565.1667 | | No log | 13.0769 | 170 | 0.5685 | 57.4855 | 47.8398 | 54.6011 | 53.7537 | 407.8333 | | No log | 13.8462 | 180 | 0.5603 | 63.7808 | 52.0648 | 58.9732 | 59.1514 | 107.6667 | | No log | 14.6154 | 190 | 0.4906 | 56.541 | 43.5496 | 50.0309 | 49.4554 | 402.3333 | | No log | 15.3846 | 200 | 0.4920 | 44.085 | 31.8595 | 41.9242 | 42.2744 | 130.6667 | | No log | 16.1538 | 210 | 0.4519 | 57.8642 | 47.346 | 53.4872 | 53.7607 | 294.0 | | No log | 16.9231 | 220 | 0.4319 | 44.5213 | 29.3385 | 36.2116 | 36.1914 | 481.0 | | No log | 17.6923 | 230 | 0.4147 | 52.262 | 33.4537 | 42.1175 | 42.8641 | 335.6667 | | No log | 18.4615 | 240 | 0.4411 | 33.5609 | 21.155 | 26.7958 | 27.0263 | 785.1667 | | No log | 19.2308 | 250 | 0.3791 | 62.4765 | 48.3805 | 56.5917 | 55.6696 | 301.6667 | | No log | 20.0 | 260 | 0.3913 | 66.6348 | 54.4823 | 59.6097 | 
59.9255 | 144.5 | | No log | 20.7692 | 270 | 0.3530 | 54.5169 | 46.9471 | 50.0583 | 49.4563 | 431.1667 | | No log | 21.5385 | 280 | 0.3245 | 46.7808 | 38.4793 | 42.4197 | 42.2085 | 712.3333 | | No log | 22.3077 | 290 | 0.3368 | 47.0382 | 35.8428 | 39.104 | 38.5503 | 735.8333 | | No log | 23.0769 | 300 | 0.3297 | 53.7986 | 44.0834 | 46.4405 | 47.5762 | 654.3333 | | No log | 23.8462 | 310 | 0.2940 | 59.8414 | 45.4853 | 53.3007 | 53.3967 | 155.8333 | | No log | 24.6154 | 320 | 0.3340 | 65.9 | 52.8727 | 59.5371 | 59.484 | 227.1667 | | No log | 25.3846 | 330 | 0.2812 | 58.7644 | 47.4464 | 51.8233 | 52.0057 | 302.5 | | No log | 26.1538 | 340 | 0.2787 | 64.4588 | 51.5866 | 56.3922 | 55.9368 | 219.8333 | | No log | 26.9231 | 350 | 0.2872 | 55.727 | 45.0152 | 49.2849 | 49.2548 | 601.0 | | No log | 27.6923 | 360 | 0.2971 | 63.8289 | 52.3489 | 57.6671 | 57.2489 | 361.0 | | No log | 28.4615 | 370 | 0.2893 | 60.4914 | 49.2527 | 54.2347 | 54.2306 | 174.1667 | | No log | 29.2308 | 380 | 0.2479 | 65.7383 | 53.9204 | 57.7956 | 58.0014 | 304.0 | | No log | 30.0 | 390 | 0.2452 | 58.2415 | 49.1706 | 49.5983 | 49.1554 | 630.1667 | | No log | 30.7692 | 400 | 0.2504 | 54.9945 | 42.7543 | 45.5489 | 46.7113 | 664.3333 | | No log | 31.5385 | 410 | 0.2361 | 62.8874 | 47.848 | 52.1486 | 52.6791 | 439.3333 | | No log | 32.3077 | 420 | 0.2282 | 35.307 | 20.7981 | 25.3321 | 25.7283 | 648.3333 | | No log | 33.0769 | 430 | 0.2268 | 39.9343 | 26.2938 | 32.5539 | 32.5389 | 464.8333 | | No log | 33.8462 | 440 | 0.2160 | 37.5551 | 29.1716 | 36.4583 | 36.3205 | 23.0 | | No log | 34.6154 | 450 | 0.2049 | 43.1026 | 33.2667 | 40.7167 | 40.7024 | 108.0 | | No log | 35.3846 | 460 | 0.2006 | 61.876 | 50.0227 | 53.1594 | 53.2425 | 502.6667 | | No log | 36.1538 | 470 | 0.1934 | 60.7038 | 50.0727 | 55.2509 | 54.8126 | 338.8333 | | No log | 36.9231 | 480 | 0.1960 | 70.3567 | 56.2927 | 61.7649 | 62.2948 | 358.6667 | | No log | 37.6923 | 490 | 0.1792 | 59.3192 | 42.9024 | 47.1844 | 47.5165 | 355.5 | | 0.5531 | 38.4615 | 500 | 0.1755 | 58.8161 | 44.5037 | 47.7178 | 47.6386 | 501.5 | | 0.5531 | 39.2308 | 510 | 0.1892 | 54.0773 | 43.7896 | 47.246 | 47.0727 | 440.1667 | | 0.5531 | 40.0 | 520 | 0.1821 | 57.2344 | 46.5657 | 52.5641 | 52.5542 | 589.1667 | | 0.5531 | 40.7692 | 530 | 0.1729 | 68.5089 | 53.586 | 60.131 | 60.3304 | 292.6667 | | 0.5531 | 41.5385 | 540 | 0.1989 | 63.9246 | 51.624 | 55.4652 | 55.8813 | 355.3333 | | 0.5531 | 42.3077 | 550 | 0.1868 | 60.7441 | 50.1997 | 55.0352 | 53.7644 | 564.3333 | | 0.5531 | 43.0769 | 560 | 0.1570 | 44.0831 | 33.923 | 37.6398 | 37.451 | 748.6667 | | 0.5531 | 43.8462 | 570 | 0.1806 | 60.5725 | 47.5269 | 52.2245 | 53.3507 | 487.8333 | | 0.5531 | 44.6154 | 580 | 0.1984 | 64.7623 | 56.5668 | 58.7952 | 59.3482 | 527.1667 | | 0.5531 | 45.3846 | 590 | 0.1673 | 62.8231 | 50.6443 | 53.4276 | 53.4813 | 385.8333 | | 0.5531 | 46.1538 | 600 | 0.1593 | 77.1493 | 70.2538 | 73.9133 | 74.0634 | 336.5 | | 0.5531 | 46.9231 | 610 | 0.1787 | 69.6579 | 57.144 | 62.8631 | 63.1825 | 264.1667 | | 0.5531 | 47.6923 | 620 | 0.1579 | 67.3991 | 55.4929 | 60.496 | 59.9907 | 237.5 | | 0.5531 | 48.4615 | 630 | 0.1510 | 55.7614 | 52.4735 | 54.2066 | 54.4553 | 351.3333 | | 0.5531 | 49.2308 | 640 | 0.1490 | 66.8343 | 59.1175 | 62.6098 | 62.6185 | 489.1667 | | 0.5531 | 50.0 | 650 | 0.1450 | 73.7447 | 68.8381 | 72.2138 | 71.7347 | 403.1667 | | 0.5531 | 50.7692 | 660 | 0.1435 | 73.4612 | 62.1625 | 67.6424 | 67.8374 | 335.0 | | 0.5531 | 51.5385 | 670 | 0.1412 | 69.9245 | 63.2467 | 67.5193 | 66.7139 | 459.3333 | | 0.5531 | 52.3077 | 680 | 
0.1537 | 67.309 | 56.0056 | 60.5465 | 60.7674 | 483.3333 | | 0.5531 | 53.0769 | 690 | 0.1618 | 66.0585 | 54.5418 | 60.2616 | 59.8329 | 391.1667 | | 0.5531 | 53.8462 | 700 | 0.1546 | 62.9813 | 57.9394 | 61.4801 | 60.8618 | 532.5 | | 0.5531 | 54.6154 | 710 | 0.1768 | 69.2968 | 62.2167 | 65.5068 | 65.6779 | 463.5 | | 0.5531 | 55.3846 | 720 | 0.1523 | 70.6019 | 64.4629 | 68.7182 | 68.6705 | 468.3333 | | 0.5531 | 56.1538 | 730 | 0.1452 | 74.6336 | 70.8117 | 73.3083 | 73.5846 | 427.5 | | 0.5531 | 56.9231 | 740 | 0.1458 | 80.2581 | 73.4241 | 77.8048 | 78.2945 | 321.5 | | 0.5531 | 57.6923 | 750 | 0.1454 | 69.5709 | 60.7631 | 64.0057 | 64.1665 | 438.5 | | 0.5531 | 58.4615 | 760 | 0.1440 | 74.8974 | 70.6795 | 73.4561 | 73.6899 | 415.6667 | | 0.5531 | 59.2308 | 770 | 0.1420 | 75.8343 | 70.7545 | 74.2487 | 74.3303 | 370.8333 | | 0.5531 | 60.0 | 780 | 0.1518 | 68.975 | 60.6509 | 63.3542 | 63.4528 | 488.0 | | 0.5531 | 60.7692 | 790 | 0.1329 | 75.4609 | 65.9764 | 70.407 | 70.9722 | 379.6667 | | 0.5531 | 61.5385 | 800 | 0.1298 | 75.6475 | 67.6634 | 72.3407 | 72.6996 | 405.3333 | | 0.5531 | 62.3077 | 810 | 0.1324 | 76.1183 | 68.3992 | 73.0096 | 73.3558 | 379.3333 | | 0.5531 | 63.0769 | 820 | 0.1469 | 61.1852 | 57.2433 | 60.7155 | 60.4608 | 675.1667 | | 0.5531 | 63.8462 | 830 | 0.1385 | 68.2356 | 60.6576 | 63.8079 | 63.9332 | 513.1667 | | 0.5531 | 64.6154 | 840 | 0.1434 | 71.3804 | 66.5798 | 69.5366 | 69.5204 | 508.0 | | 0.5531 | 65.3846 | 850 | 0.1557 | 63.2252 | 59.4299 | 61.8559 | 61.89 | 537.0 | | 0.5531 | 66.1538 | 860 | 0.1489 | 74.2213 | 68.7578 | 72.1378 | 72.1929 | 472.1667 | | 0.5531 | 66.9231 | 870 | 0.1582 | 79.3572 | 72.5039 | 77.4724 | 77.8716 | 324.5 | | 0.5531 | 67.6923 | 880 | 0.1419 | 70.4109 | 65.0778 | 68.5519 | 68.6548 | 523.0 | | 0.5531 | 68.4615 | 890 | 0.1403 | 75.0692 | 67.5111 | 72.954 | 73.2228 | 379.3333 | | 0.5531 | 69.2308 | 900 | 0.1411 | 74.8948 | 66.439 | 72.7139 | 73.0614 | 383.0 | | 0.5531 | 70.0 | 910 | 0.1423 | 79.3572 | 71.8921 | 77.4724 | 77.8716 | 325.5 | | 0.5531 | 70.7692 | 920 | 0.1398 | 79.3572 | 72.135 | 77.4724 | 77.8716 | 325.5 | | 0.5531 | 71.5385 | 930 | 0.1376 | 75.2809 | 70.7071 | 73.6409 | 73.8805 | 410.0 | | 0.5531 | 72.3077 | 940 | 0.1440 | 75.7518 | 70.6157 | 74.2567 | 74.4963 | 381.0 | | 0.5531 | 73.0769 | 950 | 0.1434 | 80.9338 | 73.4733 | 78.7226 | 79.3074 | 319.1667 | | 0.5531 | 73.8462 | 960 | 0.1403 | 80.33 | 73.1042 | 78.1987 | 78.7715 | 321.0 | | 0.5531 | 74.6154 | 970 | 0.1393 | 75.7518 | 70.7071 | 74.2151 | 74.4547 | 377.6667 | | 0.5531 | 75.3846 | 980 | 0.1363 | 75.2169 | 70.6795 | 73.6694 | 73.9091 | 414.1667 | | 0.5531 | 76.1538 | 990 | 0.1392 | 75.7518 | 70.7639 | 74.5831 | 74.8227 | 371.6667 | | 0.0743 | 76.9231 | 1000 | 0.1457 | 75.8091 | 71.008 | 74.7065 | 74.9461 | 369.5 | | 0.0743 | 77.6923 | 1010 | 0.1476 | 75.6793 | 70.7662 | 74.2724 | 74.512 | 389.0 | | 0.0743 | 78.4615 | 1020 | 0.1504 | 74.9721 | 70.6949 | 73.5623 | 73.6876 | 419.8333 | | 0.0743 | 79.2308 | 1030 | 0.1488 | 74.9721 | 70.6949 | 73.5623 | 73.6876 | 419.8333 | | 0.0743 | 80.0 | 1040 | 0.1457 | 67.2012 | 63.9833 | 66.413 | 66.8448 | 518.6667 | | 0.0743 | 80.7692 | 1050 | 0.1411 | 75.0783 | 70.1206 | 73.56 | 73.7876 | 416.8333 | | 0.0743 | 81.5385 | 1060 | 0.1444 | 74.9181 | 70.6595 | 73.5353 | 73.7381 | 430.0 | | 0.0743 | 82.3077 | 1070 | 0.1661 | 75.252 | 70.7071 | 73.6151 | 73.8548 | 412.0 | | 0.0743 | 83.0769 | 1080 | 0.1686 | 75.7518 | 71.0652 | 74.5395 | 74.7791 | 367.6667 | | 0.0743 | 83.8462 | 1090 | 0.1691 | 75.0598 | 70.7071 | 73.5513 | 73.7701 | 417.1667 | | 
0.0743 | 84.6154 | 1100 | 0.1678 | 74.9666 | 70.6637 | 73.4386 | 73.6548 | 423.0 | | 0.0743 | 85.3846 | 1110 | 0.1671 | 74.7224 | 70.4484 | 73.3686 | 73.5149 | 453.0 | | 0.0743 | 86.1538 | 1120 | 0.1656 | 75.7518 | 70.7717 | 74.3526 | 74.5922 | 378.3333 | | 0.0743 | 86.9231 | 1130 | 0.1643 | 75.7518 | 70.7717 | 74.3526 | 74.5922 | 378.5 | | 0.0743 | 87.6923 | 1140 | 0.1596 | 75.7518 | 70.7717 | 74.3526 | 74.5922 | 378.5 | | 0.0743 | 88.4615 | 1150 | 0.1592 | 75.2818 | 70.7071 | 73.7514 | 73.991 | 403.1667 | | 0.0743 | 89.2308 | 1160 | 0.1607 | 75.2883 | 70.7071 | 73.6474 | 73.887 | 410.3333 | | 0.0743 | 90.0 | 1170 | 0.1600 | 75.0598 | 70.7071 | 73.5513 | 73.7701 | 417.1667 | | 0.0743 | 90.7692 | 1180 | 0.1571 | 75.3879 | 70.7071 | 73.981 | 74.2206 | 397.0 | | 0.0743 | 91.5385 | 1190 | 0.1561 | 75.3966 | 70.7071 | 73.9896 | 74.2292 | 396.8333 | | 0.0743 | 92.3077 | 1200 | 0.1556 | 75.3794 | 70.7071 | 73.9724 | 74.2121 | 398.3333 | | 0.0743 | 93.0769 | 1210 | 0.1555 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 93.8462 | 1220 | 0.1556 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 94.6154 | 1230 | 0.1557 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 95.3846 | 1240 | 0.1558 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 96.1538 | 1250 | 0.1559 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 96.9231 | 1260 | 0.1559 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 97.6923 | 1270 | 0.1559 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 98.4615 | 1280 | 0.1560 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | | 0.0743 | 99.2308 | 1290 | 0.1560 | 75.7518 | 70.9213 | 74.5831 | 74.8227 | 370.3333 | | 0.0743 | 100.0 | 1300 | 0.1560 | 75.4228 | 70.7071 | 74.0159 | 74.2555 | 396.1667 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.5.0+cu121 - Datasets 3.0.2 - Tokenizers 0.19.1
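The card does not state the task, but the ROUGE metrics and long generation lengths (eval Gen Len ≈ 396 tokens) point to a text2text generation use. A hedged sketch with the standard transformers pipeline; the input string is a placeholder, since the expected input format is undocumented:

```python
# Hedged sketch: feed raw text through the fine-tuned checkpoint.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="imhereforthememes/t5-small-fine-tuned_model_3")
out = pipe("your input text here", max_new_tokens=512)  # eval Gen Len averaged ~396 tokens
print(out[0]["generated_text"])
```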
g-assismoraes/bbau-semeval25_fold5
g-assismoraes
2024-10-28T17:31:09Z
167
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T17:29:50Z
--- library_name: transformers license: mit base_model: neuralmind/bert-base-portuguese-cased tags: - generated_from_trainer model-index: - name: bbau-semeval25_fold5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bbau-semeval25_fold5 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4468 - Precision Samples: 1.0 - Recall Samples: 0.0 - F1 Samples: 0.0 - Precision Macro: 1.0 - Recall Macro: 0.3333 - F1 Macro: 0.3333 - Precision Micro: 1.0 - Recall Micro: 0.0 - F1 Micro: 0.0 - Precision Weighted: 1.0 - Recall Weighted: 0.0 - F1 Weighted: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:| | No log | 1.0 | 5 | 0.6313 | 0.0420 | 0.2008 | 0.0680 | 0.4827 | 0.4955 | 0.2501 | 0.0427 | 0.2212 | 0.0715 | 0.3167 | 0.2212 | 0.0466 | | 0.642 | 2.0 | 10 | 0.5793 | 0.0496 | 0.1187 | 0.0683 | 0.7540 | 0.4216 | 0.3058 | 0.0519 | 0.1346 | 0.0749 | 0.6918 | 0.1346 | 0.0388 | | 0.642 | 3.0 | 15 | 0.5427 | 0.025 | 0.0167 | 0.02 | 0.9250 | 0.3561 | 0.3197 | 0.0233 | 0.0192 | 0.0211 | 0.8565 | 0.0192 | 0.0014 | | 0.553 | 4.0 | 20 | 0.5135 | 0.125 | 0.0083 | 0.0125 | 0.9701 | 0.3485 | 0.3342 | 0.0227 | 0.0096 | 0.0135 | 0.9714 | 0.0096 | 0.0005 | | 0.553 | 5.0 | 25 | 0.4909 | 0.925 | 0.0 | 0.0 | 0.9697 | 0.3333 | 0.3333 | 0.0 | 0.0 | 0.0 | 0.9712 | 0.0 | 0.0 | | 0.5015 | 6.0 | 30 | 0.4738 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3333 | 0.3333 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.5015 | 7.0 | 35 | 0.4620 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3333 | 0.3333 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4718 | 8.0 | 40 | 0.4539 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3333 | 0.3333 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4718 | 9.0 | 45 | 0.4488 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3333 | 0.3333 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4578 | 10.0 | 50 | 0.4468 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3333 | 0.3333 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
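The samples/macro/micro precision-recall metrics above indicate a multi-label classifier. A hedged inference sketch follows; the label set and decision threshold are not documented in the card, so the 0.5 cutoff and the example input are assumptions:

```python
# Hedged sketch: multi-label inference via a sigmoid over the logits.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "g-assismoraes/bbau-semeval25_fold5"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("texto de exemplo em português", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]
# 0.5 is an assumed threshold; the card documents no operating point.
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```

The sibling fold models in the records below (`bbau-semeval25_fold2`, `bbau-semeval25_fold1`) share the same setup, so the same sketch applies with their repo ids.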
g-assismoraes/bbau-semeval25_fold2
g-assismoraes
2024-10-28T17:27:33Z
165
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T17:26:36Z
--- library_name: transformers license: mit base_model: neuralmind/bert-base-portuguese-cased tags: - generated_from_trainer model-index: - name: bbau-semeval25_fold2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bbau-semeval25_fold2 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4474 - Precision Samples: 1.0 - Recall Samples: 0.0 - F1 Samples: 0.0 - Precision Macro: 1.0 - Recall Macro: 0.3636 - F1 Macro: 0.3636 - Precision Micro: 1.0 - Recall Micro: 0.0 - F1 Micro: 0.0 - Precision Weighted: 1.0 - Recall Weighted: 0.0 - F1 Weighted: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:| | No log | 1.0 | 5 | 0.6293 | 0.0783 | 0.3868 | 0.1220 | 0.4983 | 0.5865 | 0.3159 | 0.0751 | 0.375 | 0.1252 | 0.3532 | 0.375 | 0.1464 | | 0.6408 | 2.0 | 10 | 0.5789 | 0.0787 | 0.2286 | 0.1079 | 0.7311 | 0.4717 | 0.3440 | 0.0839 | 0.2054 | 0.1192 | 0.5702 | 0.2054 | 0.0796 | | 0.6408 | 3.0 | 15 | 0.5425 | 0.0708 | 0.0583 | 0.0554 | 0.9220 | 0.3953 | 0.3740 | 0.0706 | 0.0536 | 0.0609 | 0.8686 | 0.0536 | 0.0258 | | 0.552 | 4.0 | 20 | 0.5135 | 0.1125 | 0.0271 | 0.0396 | 0.9759 | 0.3864 | 0.3719 | 0.0952 | 0.0357 | 0.0519 | 0.9634 | 0.0357 | 0.0110 | | 0.552 | 5.0 | 25 | 0.4912 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3636 | 0.3636 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.5007 | 6.0 | 30 | 0.4745 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3636 | 0.3636 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.5007 | 7.0 | 35 | 0.4624 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3636 | 0.3636 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4713 | 8.0 | 40 | 0.4543 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3636 | 0.3636 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4713 | 9.0 | 45 | 0.4493 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3636 | 0.3636 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4567 | 10.0 | 50 | 0.4474 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3636 | 0.3636 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
Tsunami-th/Tsunami-1.0-14B-Instruct
Tsunami-th
2024-10-28T17:27:04Z
55
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "th", "en", "base_model:Qwen/Qwen2.5-14B", "base_model:finetune:Qwen/Qwen2.5-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-25T23:13:44Z
---
language:
- th
- en
license: apache-2.0
library_name: transformers
base_model:
- Qwen/Qwen2.5-14B-Instruct
- Qwen/Qwen2.5-14B
pipeline_tag: text-generation
---

<img src="./Tsunami.webp" alt="Tsunami Model" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Tsunami-1.0-14B-Instruct

**TSUNAMI**: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.

The full name **TSUNAMI** was created by ChatGPT.

---

### Information

**Tsunami-1.0-14B-Instruct** is a Thai large language model fine-tuned from **Qwen2.5-14B** on a Thai dataset.

---

### Author
- Pollakrit Lorprasertkul | [email protected]

---

### Performance Evaluation

Below are the benchmark results of **Tsunami-1.0-14B-Instruct** compared to similar models in its class:

| Model | Average | Thai Exam | M3Exam |
| --- | --- | --- | --- |
| Qwen2.5-14B-Instruct | 58.45 | 57.35 | 59.55 |
| Meta-Llama-3.1-70B-Instruct | 59.38 | 58.23 | 60.52 |
| llama-3-typhoon-v1.5x-70b-instruct | 59.34 | 58.76 | 59.92 |
| openthaigpt1.5-14b-instruct | 60.41 | 58.41 | 62.41 |
| **Tsunami-1.0-14B-Instruct** | **62.05** | **61.06** | **63.05** |

---

### Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
```

---

### How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Tsunami-th/Tsunami-1.0-14B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)

response = tokenizer.decode(output[0, len(inputs['input_ids'][0]):], skip_special_tokens=True)
```

---
g-assismoraes/bbau-semeval25_fold1
g-assismoraes
2024-10-28T17:26:35Z
187
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T17:14:20Z
--- library_name: transformers license: mit base_model: neuralmind/bert-base-portuguese-cased tags: - generated_from_trainer model-index: - name: bbau-semeval25_fold1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bbau-semeval25_fold1 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4537 - Precision Samples: 1.0 - Recall Samples: 0.0 - F1 Samples: 0.0 - Precision Macro: 1.0 - Recall Macro: 0.3939 - F1 Macro: 0.3939 - Precision Micro: 1.0 - Recall Micro: 0.0 - F1 Micro: 0.0 - Precision Weighted: 1.0 - Recall Weighted: 0.0 - F1 Weighted: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:| | No log | 1.0 | 5 | 0.6318 | 0.0483 | 0.2944 | 0.0758 | 0.5723 | 0.5505 | 0.2182 | 0.0462 | 0.2525 | 0.0781 | 0.6003 | 0.2525 | 0.0437 | | 0.6419 | 2.0 | 10 | 0.5807 | 0.0523 | 0.2259 | 0.0771 | 0.8251 | 0.4798 | 0.3760 | 0.0525 | 0.1717 | 0.0804 | 0.7656 | 0.1717 | 0.0321 | | 0.6419 | 3.0 | 15 | 0.5453 | 0.0705 | 0.2280 | 0.0983 | 0.8718 | 0.4621 | 0.3778 | 0.0721 | 0.1616 | 0.0997 | 0.8203 | 0.1616 | 0.0385 | | 0.5558 | 4.0 | 20 | 0.5173 | 0.0604 | 0.1301 | 0.0697 | 0.9280 | 0.4394 | 0.3705 | 0.06 | 0.0909 | 0.0723 | 0.9184 | 0.0909 | 0.0167 | | 0.5558 | 5.0 | 25 | 0.4962 | 0.0667 | 0.1051 | 0.0701 | 0.9460 | 0.4356 | 0.3896 | 0.0702 | 0.0808 | 0.0751 | 0.9259 | 0.0808 | 0.0273 | | 0.5084 | 6.0 | 30 | 0.4806 | 0.15 | 0.0 | 0.0 | 0.9545 | 0.3939 | 0.3788 | 0.0 | 0.0 | 0.0 | 0.9495 | 0.0 | 0.0 | | 0.5084 | 7.0 | 35 | 0.4688 | 0.65 | 0.0 | 0.0 | 0.9848 | 0.3939 | 0.3788 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4795 | 8.0 | 40 | 0.4605 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3939 | 0.3939 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4795 | 9.0 | 45 | 0.4555 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3939 | 0.3939 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 0.4666 | 10.0 | 50 | 0.4537 | 1.0 | 0.0 | 0.0 | 1.0 | 0.3939 | 0.3939 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
pxyyy/SmolLM-135M-epoch1
pxyyy
2024-10-28T17:24:46Z
168
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T17:22:30Z
---
library_name: transformers
tags: []
---

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torch.utils
from torch.utils.data import Dataset, DataLoader
import os
import argparse
# from ..utils import progress_bar
import time
import random
import numpy as np
import pickle
import hashlib
import io
import torch.utils.data
from tqdm import tqdm
from transformers import AutoModelForCausalLM, DataCollatorForLanguageModeling, AutoTokenizer, LlamaForCausalLM
from datasets import load_dataset
from functools import partial
import copy
import wandb


def tokenize(dp, tokenizer):
    inputs = tokenizer(
        dp['text'],
        # return_tensors="pt",
        max_length=128,
        truncation=True,
        padding=False
    )["input_ids"]
    inputs = inputs[:128]
    return {'input_ids': inputs, 'labels': copy.deepcopy(inputs)}


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
    parser.add_argument('--lr', default=2e-5, type=float, help='learning rate')
    parser.add_argument('--batch-size', default=64, type=int, help='batch size')
    parser.add_argument('--model-ckpt', default=None, type=str, help='model checkpoint')
    parser.add_argument('--save', default=None, type=str, help='model checkpoint save dir')
    parser.add_argument('--epoch', default=1, type=int, help='number of epochs')
    parser.add_argument('--save_interval', default=5, type=int, help='model checkpoint saving interval')
    parser.add_argument('--pseudo_random', type=int, default=1234, help='pseudo random seed for all')
    args = parser.parse_args()

    if args.pseudo_random is not None:
        os.environ['PYTHONHASHSEED'] = '0'
        os.environ['TF_DETERMINISTIC_OPS'] = '1'
        random.seed(args.pseudo_random + 1)
        np.random.seed(args.pseudo_random + 1)
        torch.manual_seed(args.pseudo_random)
        torch.cuda.manual_seed(args.pseudo_random)
        torch.cuda.manual_seed_all(args.pseudo_random)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
        print(f'set seed to {args.pseudo_random}')

    wandb.init(
        project='InfoScore',
        name='finetune-smol',
        config=args
    )

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    best_acc = 0  # best test accuracy
    batch_size = args.batch_size

    # Data
    print('==> Preparing data..')
    raw_texts = load_dataset("tatsu-lab/alpaca", split='train')
    ds = raw_texts.map(lambda x: {'text': x['instruction']+x['input']+x['output']})

    model = AutoModelForCausalLM.from_pretrained('HuggingFaceTB/SmolLM-135M', attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16).to(device)
    tokenizer = AutoTokenizer.from_pretrained('HuggingFaceTB/SmolLM-135M')
    tokenizer.pad_token = tokenizer.eos_token

    ds = ds.map(lambda x: tokenize(x, tokenizer)).remove_columns('instruction').remove_columns('input').remove_columns('output').remove_columns('text').remove_columns('labels')
    ds = ds.map(lambda x, idx: {'index': idx}, with_indices=True)
    print(ds[0])

    train_data = torch.utils.data.Subset(ds, list(range(40000)))
    test_data = torch.utils.data.Subset(ds, list(range(40000, 52002)))
    # texts = torch.utils.data.Subset(raw_texts, list(range(40000, 52002)))

    train_loader = DataLoader(
        train_data, shuffle=False,
        collate_fn=DataCollatorForLanguageModeling(tokenizer, mlm=False, pad_to_multiple_of=8, return_tensors="pt"),
        num_workers=8, batch_size=batch_size)
    test_loader = DataLoader(
        test_data, shuffle=False,
        collate_fn=DataCollatorForLanguageModeling(tokenizer, mlm=False, pad_to_multiple_of=8, return_tensors="pt"),
        num_workers=8, batch_size=batch_size)

    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

    # Training
    def train(epoch):
        print('\nEpoch: %d' % epoch)
        st_time = time.time()
        model.train()
        train_loss = 0
        # print(next(iter(trainloader)))
        for batch_idx, batch in enumerate(tqdm(train_loader)):
            optimizer.zero_grad()
            batch = batch.to(device)
            input_ids = batch['input_ids']
            labels = batch['labels']
            attn_mask = batch['attention_mask']
            res_model = model(input_ids, labels=labels, attention_mask=attn_mask)
            loss = res_model.loss
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        duration = time.time() - st_time
        print('Epoch: %d | Train Loss: %.3f | Time: %ds' % (epoch, train_loss/(batch_idx+1), duration), flush=True)
        model.push_to_hub('pxyyy/SmolLM-135M-epoch1', use_temp_dir=True)
        tokenizer.push_to_hub('pxyyy/SmolLM-135M-epoch1', use_temp_dir=True)
        return train_loss/(batch_idx+1)

    def test(epoch):
        model.eval()
        test_loss = 0
        with torch.no_grad():
            for batch_idx, batch in enumerate(tqdm(test_loader)):
                batch = batch.to(device)
                input_ids = batch['input_ids']
                labels = batch['labels']
                attn_mask = batch['attention_mask']
                outputs = model(input_ids, labels=labels, attention_mask=attn_mask)
                loss = outputs.loss
                test_loss += loss.item()
        print('Epoch: %d | Test Loss: %.3f ' % (epoch, test_loss/(batch_idx+1)), flush=True)
        # Save checkpoint.
        if epoch % args.save_interval == 0 and args.save is not None:
            print('Saving..')
            if not os.path.isdir(args.save):
                os.mkdir(args.save)
            torch.save(model.state_dict(), f'{args.save}/ckpt-{epoch}.pth')
        return test_loss/(batch_idx+1)

    for epoch in range(1, args.epoch+1):
        train_loss = train(epoch)
        test_loss = test(epoch)
        scheduler.step()
        wandb.log({'train/train_loss': train_loss, 'eval/test_loss': test_loss})
```

`python3 resnet-cifar/finetune_smol.py`

# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations.
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
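Since the script above pushes the tuned weights and tokenizer to `pxyyy/SmolLM-135M-epoch1`, here is a short, hedged sketch of reloading that checkpoint for generation. The prompt style mirrors the instruction+input+output concatenation used in training, but no prompt format is officially documented:

```python
# Hedged sketch: reload the pushed checkpoint and sample a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pxyyy/SmolLM-135M-epoch1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Training concatenated instruction + input + output, so a bare instruction
# is a plausible prompt; this is an assumption, not a documented format.
inputs = tokenizer("Give three tips for staying healthy.", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```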
smiled0g/preflop_gto_micro
smiled0g
2024-10-28T17:17:56Z
162
0
transformers
[ "transformers", "safetensors", "llama", "text-classification", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T17:17:53Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: preflop_gto_micro results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/smiled0g/preflop_gto_micro/runs/hcvjzcwd) # preflop_gto_micro This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 512 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.46.0 - Pytorch 2.1.1+cu121 - Datasets 3.0.2 - Tokenizers 0.20.1
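The card documents only trainer settings, so the sketch below is a generic, hedged guess at inference: the `text-classification` tag implies a sequence-classification head, but the input serialization (how a preflop spot is encoded as text) and the label names are undocumented:

```python
# Hedged sketch: single-label classification over an undocumented input format.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "smiled0g/preflop_gto_micro"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("example preflop situation string", return_tensors="pt")  # format is a guess
with torch.no_grad():
    logits = model(**inputs).logits
label_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(label_id, label_id))
```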
RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf
RichardErkhov
2024-10-28T17:16:42Z
41
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-28T09:34:37Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MSM-MS-Cydrion-22B - GGUF - Model creator: https://huggingface.co/Steelskull/ - Original model: https://huggingface.co/Steelskull/MSM-MS-Cydrion-22B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [MSM-MS-Cydrion-22B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q2_K.gguf) | Q2_K | 7.7GB | | [MSM-MS-Cydrion-22B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K_S.gguf) | Q3_K_S | 8.98GB | | [MSM-MS-Cydrion-22B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K.gguf) | Q3_K | 10.02GB | | [MSM-MS-Cydrion-22B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K_M.gguf) | Q3_K_M | 10.02GB | | [MSM-MS-Cydrion-22B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q3_K_L.gguf) | Q3_K_L | 10.92GB | | [MSM-MS-Cydrion-22B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.IQ4_XS.gguf) | IQ4_XS | 11.22GB | | [MSM-MS-Cydrion-22B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_0.gguf) | Q4_0 | 11.71GB | | [MSM-MS-Cydrion-22B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.IQ4_NL.gguf) | IQ4_NL | 11.83GB | | [MSM-MS-Cydrion-22B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_K_S.gguf) | Q4_K_S | 11.79GB | | [MSM-MS-Cydrion-22B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_K.gguf) | Q4_K | 12.43GB | | [MSM-MS-Cydrion-22B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_K_M.gguf) | Q4_K_M | 12.43GB | | [MSM-MS-Cydrion-22B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q4_1.gguf) | Q4_1 | 12.99GB | | [MSM-MS-Cydrion-22B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_0.gguf) | Q5_0 | 14.27GB | | [MSM-MS-Cydrion-22B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_K_S.gguf) | Q5_K_S | 14.27GB | | [MSM-MS-Cydrion-22B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_K.gguf) | Q5_K | 14.64GB | | [MSM-MS-Cydrion-22B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_K_M.gguf) | Q5_K_M | 14.64GB | | [MSM-MS-Cydrion-22B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q5_1.gguf) | Q5_1 | 15.56GB | | [MSM-MS-Cydrion-22B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q6_K.gguf) | Q6_K | 17.0GB | | 
[MSM-MS-Cydrion-22B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf/blob/main/MSM-MS-Cydrion-22B.Q8_0.gguf) | Q8_0 | 22.02GB | Original model description: --- base_model: - unsloth/Mistral-Small-Instruct-2409 - Steelskull/Merged-v2 - TheDrummer/Cydonia-22B-v1.1 - ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 - nbeerbower/Mistral-Small-Gutenberg-Doppel-22B - rAIfle/Acolyte-22B library_name: transformers tags: - merge license: apache-2.0 --- <!DOCTYPE html> <style> body { font-family: 'Quicksand', sans-serif; background: linear-gradient(135deg, #2E3440 0%, #1A202C 100%); color: #D8DEE9; margin: 0; padding: 0; font-size: 16px; } .container { width: 80%; max-width: 1080px; margin: 20px auto; background-color: rgba(255, 255, 255, 0.02); padding: 20px; border-radius: 12px; box-shadow: 0 4px 10px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); border: 1px solid rgba(255, 255, 255, 0.1); } .header h1 { font-size: 28px; color: #ECEFF4; margin: 0 0 20px 0; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.3); } .update-section { margin-top: 30px; } .update-section h2 { font-size: 24px; color: #88C0D0; } .update-section p { font-size: 16px; line-height: 1.6; color: #ECEFF4; } .info img { width: 100%; border-radius: 10px; margin-bottom: 15px; } a { color: #88C0D0; text-decoration: none; } a:hover { color: #A3BE8C; } .button { display: inline-block; background-color: #5E81AC; color: #E5E9F0; padding: 10px 20px; border-radius: 5px; cursor: pointer; text-decoration: none; } .button:hover { background-color: #81A1C1; } pre { background-color: #2E3440; padding: 10px; border-radius: 5px; overflow-x: auto; } code { font-family: 'Courier New', monospace; color: #D8DEE9; } </style> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>MSM-MS-Cydrion-22B Data Card</title> <link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet"> </head> <body> <div class="container"> <div class="header"> <h1>MSM-MS-Cydrion-22B</h1> </div> <div class="info"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/P6Cdc590xEGjWH3rKXDe5.jpeg"> <p>Meet Cydrion, an attempt to fuse creativity and intelligence.</p> <p><strong>Creator:</strong> <a href="https://huggingface.co/Steelskull" target="_blank">SteelSkull</a></p> <h1>About Cydrion-22B:</h1> <pre><code>Name Legend: MSM = Mistral-Small MS = Model Stock 22b = it's 22B </code></pre> <p>This model merges the robust storytelling of Cydonia with the creative edge of Acolyte, ArliAI-RPMax, and Gutenberg, plus some special sauce.</p> <p>Use the Mistral format.</p> <h2>Quants:</h2> <p>My quants: <a href="https://huggingface.co/SteelQuants/MSM-MS-Cydrion-22B-Q6_K-GGUF" target="_blank">MSM-MS-Cydrion-22B-Q6_K-GGUF</a></p> <h3>Config:</h3> <pre><code>MODEL_NAME = "MSM-MS-Cydrion-22B" yaml_config = """ base_model: Steelskull/Merged-v2 merge_method: model_stock dtype: bfloat16 models: - model: TheDrummer/Cydonia-22B-v1.1 - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 - model: nbeerbower/Mistral-Small-Gutenberg-Doppel-22B - model: rAIfle/Acolyte-22B """ </code></pre> <p><strong>If you wish to support:</strong></p> </div> <div class="donation-section"> <a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank"> <img height="36" style="border:0px;height:36px;" src="https://storage.ko-fi.com/cdn/kofi2.png?v=3" border="0" alt="Buy Me a Coffee at ko-fi.com" /> </a> </div> </div> </body> </html>
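These are standard GGUF files, so any llama.cpp-based runtime can load them. A minimal sketch, assuming a llama.cpp build whose CLI binary is named `llama-cli` and picking the Q4_K_M quant from the table above:

```bash
# Fetch one quant from this repo (file name taken from the table above)
huggingface-cli download RichardErkhov/Steelskull_-_MSM-MS-Cydrion-22B-gguf \
  MSM-MS-Cydrion-22B.Q4_K_M.gguf --local-dir .

# Run a completion with llama.cpp
./llama-cli -m MSM-MS-Cydrion-22B.Q4_K_M.gguf -p "Hello" -n 128
```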
linoyts/yarn-art-30-37-prodigy
linoyts
2024-10-28T17:13:30Z
5
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3.5-large", "sd3.5", "sd3.5-diffusers", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:adapter:stabilityai/stable-diffusion-3.5-large", "license:other", "region:us" ]
text-to-image
2024-10-28T16:50:07Z
--- base_model: stabilityai/stable-diffusion-3.5-large library_name: diffusers license: other tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3.5-large - sd3.5 - sd3.5-diffusers instance_prompt: Frog, yarn art style widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3.5-Large DreamBooth LoRA - linoyts/yarn-art-30-37-prodigy <Gallery /> ## Model description These are linoyts/yarn-art-30-37-prodigy DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-large. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). Was LoRA for the text encoder enabled? False. ## Trigger words You should use `Frog, yarn art style` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](linoyts/yarn-art-30-37-prodigy/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/yarn-art-30-37-prodigy', weight_name='pytorch_lora_weights.safetensors') image = pipeline('Frog, yarn art style').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/linoyts/yarn-art-30-37-prodigy/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
knifeayumu/Cydonia-v1.2-Magnum-v4-22B-OLD
knifeayumu
2024-10-28T17:10:31Z
55
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:TheDrummer/Cydonia-22B-v1.2", "base_model:merge:TheDrummer/Cydonia-22B-v1.2", "base_model:anthracite-org/magnum-v4-22b", "base_model:merge:anthracite-org/magnum-v4-22b", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-10-28T16:54:42Z
--- base_model: - TheDrummer/Cydonia-22B-v1.2 - anthracite-org/magnum-v4-22b library_name: transformers tags: - mergekit - merge license: other license_name: mrl inference: false license_link: https://mistral.ai/licenses/MRL-0.1.md --- # Do not quant this model. ![Not Horny Enough](Cydonia-v1.2-Magnum-v4-22B.png) # Do not quant this model. It's already available. - GGUF (static): [mradermacher/Cydonia-v1.2-magnum-v4-22B-GGUF](https://huggingface.co/mradermacher/Cydonia-v1.2-magnum-v4-22B-GGUF) - GGUF (weighted/imatrix): [mradermacher/Cydonia-v1.2-magnum-v4-22B-i1-GGUF](https://huggingface.co/mradermacher/Cydonia-v1.2-magnum-v4-22B-i1-GGUF) Unfortunately, I deleted the original cooked model while it was already on its way to being quanted. I recooked it from memory for archival purposes, just in case someone needs it. # The Drummer becomes hornier Recipe based on [MarsupialAI/Monstral-123B](https://huggingface.co/MarsupialAI/Monstral-123B). It should work, since it's the same Mistral, TheDrummer and MarsupialAI, right? This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [TheDrummer/Cydonia-22B-v1.2](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2) as a base. ### Models Merged The following models were included in the merge: * [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) ### Configuration The following YAML configuration was used to produce this model (a mergekit invocation that consumes it is sketched below): ```yaml models: - model: TheDrummer/Cydonia-22B-v1.2 parameters: weight: 0.5 - model: anthracite-org/magnum-v4-22b parameters: weight: 0.5 merge_method: task_arithmetic base_model: TheDrummer/Cydonia-22B-v1.2 dtype: float16 ```
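As referenced above, a minimal sketch of the mergekit invocation that would consume this YAML; the config file name and output directory are assumptions:

```bash
# Save the YAML above as cydonia-magnum.yaml (name is an assumption), then:
pip install mergekit
mergekit-yaml cydonia-magnum.yaml ./Cydonia-v1.2-Magnum-v4-22B --cuda
```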
michaelfeil/codegen2-7B-gptj
michaelfeil
2024-10-28T17:07:25Z
14
0
transformers
[ "transformers", "pytorch", "safetensors", "gptj", "text-generation", "fauxpilot", "gpt-j", "float16", "arxiv:2305.02309", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-05-22T21:10:01Z
--- tags: - fauxpilot - gpt-j - float16 license: apache-2.0 --- # Conversion for FauxPilot, Codegen-2 as GPT-J It feels like GPT-J, acts like any other GPT-J, but it's Codegen-2 weights under the hood. Converted on 2023-05-22 using ``` python /home/michael/fauxpilot/converter/codegen_gptj_convert.py --code_model Salesforce/codegen2-7B /home/michael/tmp-codegen2-7B-gptj ``` # Licence and other remarks: Licence conditions are intended to be identical to the original huggingface repo. # Original description see https://huggingface.co/Salesforce/codegen2-7B # CodeGen2 (CodeGen2-16B) ## Model description [CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper: [CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou. Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling and supports more programming languages. Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality. ### Causal sampling For regular causal sampling, simply generate completions given the context: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-16B") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-16B", trust_remote_code=True, revision="main") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ### Infill sampling For **infill** sampling, we introduce three new special token types: * `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` where you want to sample infill. * `<sep>`: Separator token between the suffix and the infilled sample. See below. * `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output. For example, if we want to generate infill for the following cursor position of a function: ```python def hello_world(): | return name ``` we construct an input to the model by 1. Inserting a `<mask_1>` token in place of the cursor position 2. Appending a `<sep>` token to indicate the boundary 3. Inserting another `<mask_1>` to indicate which mask we want to infill. The final snippet looks as follows: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-16B") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-16B", trust_remote_code=True, revision="main") def format(prefix, suffix): return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>" prefix = "def hello_world(): " suffix = " return name" text = format(prefix, suffix) input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]) ``` You might want to truncate the model output with `<eom>`. ## Training data This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup).
Supported languages (and frameworks) are as follows: `c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`. ## Training procedure CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption. Please refer to the paper for more details. ## Evaluation results We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details. ## Intended use and limitations As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## BibTeX entry and citation info ```bibtex @article{Nijkamp2023codegen2, title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages}, author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo}, journal={arXiv preprint}, year={2023} } ```
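Because the conversion above produces a plain GPT-J checkpoint, loading this repo should follow the standard transformers pattern. A minimal sketch, assuming the repo ships matching tokenizer files:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the converted GPT-J-format CodeGen2 weights from this repo
tokenizer = AutoTokenizer.from_pretrained("michaelfeil/codegen2-7B-gptj")
model = AutoModelForCausalLM.from_pretrained("michaelfeil/codegen2-7B-gptj", torch_dtype="auto")

# Causal completion, mirroring the upstream CodeGen2 example above
input_ids = tokenizer("def hello_world():", return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```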
MaziyarPanahi/L3-Nymeria-v2-8B-GGUF
MaziyarPanahi
2024-10-28T17:01:46Z
55
0
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:tannedbum/L3-Nymeria-v2-8B", "base_model:quantized:tannedbum/L3-Nymeria-v2-8B", "region:us", "conversational" ]
text-generation
2024-10-28T16:36:27Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation model_name: L3-Nymeria-v2-8B-GGUF base_model: tannedbum/L3-Nymeria-v2-8B inference: false model_creator: tannedbum pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/L3-Nymeria-v2-8B-GGUF](https://huggingface.co/MaziyarPanahi/L3-Nymeria-v2-8B-GGUF) - Model creator: [tannedbum](https://huggingface.co/tannedbum) - Original model: [tannedbum/L3-Nymeria-v2-8B](https://huggingface.co/tannedbum/L3-Nymeria-v2-8B) ## Description [MaziyarPanahi/L3-Nymeria-v2-8B-GGUF](https://huggingface.co/MaziyarPanahi/L3-Nymeria-v2-8B-GGUF) contains GGUF format model files for [tannedbum/L3-Nymeria-v2-8B](https://huggingface.co/tannedbum/L3-Nymeria-v2-8B). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
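As a concrete example of the libraries listed above, a minimal llama-cpp-python sketch; the quant file name is an assumption — substitute whichever file you downloaded from this repo:

```python
from llama_cpp import Llama

# Load a locally downloaded GGUF quant (file name assumed)
llm = Llama(model_path="L3-Nymeria-v2-8B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write one sentence introducing yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```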
RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf
RichardErkhov
2024-10-28T16:57:05Z
6
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-28T10:04:37Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama2-22b-chronos-alpaca-experiment1 - GGUF - Model creator: https://huggingface.co/nkpz/ - Original model: https://huggingface.co/nkpz/llama2-22b-chronos-alpaca-experiment1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama2-22b-chronos-alpaca-experiment1.Q2_K.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q2_K.gguf) | Q2_K | 7.56GB | | [llama2-22b-chronos-alpaca-experiment1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q3_K_S.gguf) | Q3_K_S | 8.82GB | | [llama2-22b-chronos-alpaca-experiment1.Q3_K.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q3_K.gguf) | Q3_K | 9.88GB | | [llama2-22b-chronos-alpaca-experiment1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q3_K_M.gguf) | Q3_K_M | 9.88GB | | [llama2-22b-chronos-alpaca-experiment1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q3_K_L.gguf) | Q3_K_L | 10.81GB | | [llama2-22b-chronos-alpaca-experiment1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.IQ4_XS.gguf) | IQ4_XS | 10.95GB | | [llama2-22b-chronos-alpaca-experiment1.Q4_0.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q4_0.gguf) | Q4_0 | 11.49GB | | [llama2-22b-chronos-alpaca-experiment1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.IQ4_NL.gguf) | IQ4_NL | 11.56GB | | [llama2-22b-chronos-alpaca-experiment1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q4_K_S.gguf) | Q4_K_S | 11.58GB | | [llama2-22b-chronos-alpaca-experiment1.Q4_K.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q4_K.gguf) | Q4_K | 12.27GB | | [llama2-22b-chronos-alpaca-experiment1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q4_K_M.gguf) | Q4_K_M | 12.27GB | | [llama2-22b-chronos-alpaca-experiment1.Q4_1.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q4_1.gguf) | Q4_1 | 12.75GB | | [llama2-22b-chronos-alpaca-experiment1.Q5_0.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q5_0.gguf) | Q5_0 | 14.0GB | | [llama2-22b-chronos-alpaca-experiment1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q5_K_S.gguf) | Q5_K_S | 14.0GB | | 
[llama2-22b-chronos-alpaca-experiment1.Q5_K.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q5_K.gguf) | Q5_K | 14.41GB | | [llama2-22b-chronos-alpaca-experiment1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q5_K_M.gguf) | Q5_K_M | 14.41GB | | [llama2-22b-chronos-alpaca-experiment1.Q5_1.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q5_1.gguf) | Q5_1 | 15.26GB | | [llama2-22b-chronos-alpaca-experiment1.Q6_K.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q6_K.gguf) | Q6_K | 16.68GB | | [llama2-22b-chronos-alpaca-experiment1.Q8_0.gguf](https://huggingface.co/RichardErkhov/nkpz_-_llama2-22b-chronos-alpaca-experiment1-gguf/blob/main/llama2-22b-chronos-alpaca-experiment1.Q8_0.gguf) | Q8_0 | 21.6GB | Original model description: --- license: other --- Update: after a lot of fun experiments, I'm doubtful there is a way for this method to really have a positive impact on the outcome without resources similar to those used to train Llama in the first place. Leaving it up for the sake of mad science, but moving on. Not recommended for general use. Llama 2 Chronos 13b x Llama 1 Chronos 33b x Alpaca This is a frankenllama model based on the technique in https://huggingface.co/chargoddard/llama2-22b I built my base 22b model by using https://huggingface.co/Oniichat/llama2-base-chronos-13b-merge as a base, and https://huggingface.co/elinas/chronos-33b as a donor. I then trained a QLoRA on the Alpaca dataset with the default PEFT configuration from https://github.com/facebookresearch/llama-recipes/blob/main/quickstart.ipynb This is the result of baking in that adapter. This configuration only targets `q_proj` and `v_proj` and uses `r=8` (sketched in code below). I was expecting to need to add more targets and increase `r` to get significant improvements, but I was surprised by the quality of its context awareness, and I'm starting to think that maybe a 32MB LoRA is all it takes to get decent results in 22b. I will keep playing with other PEFT configurations and see where that gets me next. If anyone wants the chronos 22b base model (requires fine-tuning) or the adapter, let me know in the community discussions.
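For reference, a sketch of the PEFT configuration described above; `lora_alpha` and `lora_dropout` are assumptions based on the llama-recipes quickstart defaults, and the base-model path is hypothetical:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("./frankenllama-22b")  # hypothetical local path

config = LoraConfig(
    r=8,                                  # rank used above
    lora_alpha=32,                        # assumed default
    lora_dropout=0.05,                    # assumed default
    target_modules=["q_proj", "v_proj"],  # only attention query/value projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a small fraction of the 22b weights train
```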
HappyAIUser/Test3-16bit
HappyAIUser
2024-10-28T16:44:48Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-3B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T16:35:52Z
--- base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** HappyAIUser - **License:** apache-2.0 - **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Aarushhh/yuno-225M-python
Aarushhh
2024-10-28T16:43:47Z
203
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "python", "SLM", "small", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T16:39:22Z
--- library_name: transformers tags: - code - python - SLM - small license: cc-by-nc-sa-4.0 --- # A (very) small code LLM ## It has been trained on working Python code snippets ## It is a base model; it has not been instruction fine-tuned ### MODEL CARD WIP
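Until the card is filled in, a hedged quick-start sketch — the repo tags indicate a Llama-architecture causal LM, so standard transformers loading is assumed to work:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Aarushhh/yuno-225M-python")
model = AutoModelForCausalLM.from_pretrained("Aarushhh/yuno-225M-python")

# Base model: give it raw code to continue, not instructions
ids = tok("def fibonacci(n):\n", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=48)[0], skip_special_tokens=True))
```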
mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF
mradermacher
2024-10-28T16:39:11Z
44
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "cleverboi", "theprint", "en", "dataset:theprint/CleverBoi", "base_model:theprint/CleverBoi-Mistral-0.3-7B", "base_model:quantized:theprint/CleverBoi-Mistral-0.3-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-28T16:20:48Z
--- base_model: theprint/CleverBoi-Mistral-0.3-7B datasets: - theprint/CleverBoi language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft - cleverboi - theprint --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/theprint/CleverBoi-Mistral-0.3-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | 
[GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
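The Usage note above defers to TheBloke's READMEs for handling multi-part files; as a minimal sketch, downloading a single quant with the Hugging Face CLI (the split-file naming in the comment is an assumption about how multi-part quants are published):

```bash
# Fetch one imatrix quant from this repo
huggingface-cli download mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF \
  CleverBoi-Mistral-0.3-7B.i1-Q4_K_M.gguf --local-dir .

# If a quant ships in parts, concatenate them in order before loading, e.g.:
# cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```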
mradermacher/CleverBoi-Mistral-0.3-7B-GGUF
mradermacher
2024-10-28T16:39:10Z
23
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "cleverboi", "theprint", "en", "dataset:theprint/CleverBoi", "base_model:theprint/CleverBoi-Mistral-0.3-7B", "base_model:quantized:theprint/CleverBoi-Mistral-0.3-7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-28T13:03:23Z
--- base_model: theprint/CleverBoi-Mistral-0.3-7B datasets: - theprint/CleverBoi language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft - cleverboi - theprint --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/theprint/CleverBoi-Mistral-0.3-7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/CleverBoi-Mistral-0.3-7B-GGUF/resolve/main/CleverBoi-Mistral-0.3-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf
RichardErkhov
2024-10-28T16:35:23Z
44
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-10-28T03:47:35Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) XuanYuan-70B-Chat - GGUF - Model creator: https://huggingface.co/Duxiaoman-DI/ - Original model: https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat/ | Name | Quant method | Size | | ---- | ---- | ---- | | [XuanYuan-70B-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q2_K.gguf) | Q2_K | 23.78GB | | [XuanYuan-70B-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.IQ3_XS.gguf) | IQ3_XS | 26.44GB | | [XuanYuan-70B-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.IQ3_S.gguf) | IQ3_S | 27.94GB | | [XuanYuan-70B-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q3_K_S.gguf) | Q3_K_S | 27.94GB | | [XuanYuan-70B-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.IQ3_M.gguf) | IQ3_M | 28.89GB | | [XuanYuan-70B-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q3_K.gguf) | Q3_K | 31.06GB | | [XuanYuan-70B-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q3_K_M.gguf) | Q3_K_M | 31.06GB | | [XuanYuan-70B-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q3_K_L.gguf) | Q3_K_L | 33.74GB | | [XuanYuan-70B-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.IQ4_XS.gguf) | IQ4_XS | 34.72GB | | [XuanYuan-70B-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q4_0.gguf) | Q4_0 | 36.28GB | | [XuanYuan-70B-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.IQ4_NL.gguf) | IQ4_NL | 36.63GB | | [XuanYuan-70B-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/blob/main/XuanYuan-70B-Chat.Q4_K_S.gguf) | Q4_K_S | 36.63GB | | [XuanYuan-70B-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q4_K | 38.66GB | | [XuanYuan-70B-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q4_K_M | 38.66GB | | [XuanYuan-70B-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q4_1 | 40.28GB | | [XuanYuan-70B-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q5_0 | 44.29GB | | [XuanYuan-70B-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q5_K_S | 44.29GB | | [XuanYuan-70B-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q5_K | 45.49GB | | [XuanYuan-70B-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q5_K_M | 45.49GB | | [XuanYuan-70B-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q5_1 | 48.29GB | | 
[XuanYuan-70B-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q6_K | 52.79GB | | [XuanYuan-70B-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Duxiaoman-DI_-_XuanYuan-70B-Chat-gguf/tree/main/) | Q8_0 | 68.38GB | Original model description: --- license: llama2 --- XuanYuan-70B is a series of financial LLMs built by Chinese-language enhancement of Llama2-70B. It comprises a base model obtained by incremental pretraining on large Chinese and English corpora, plus a chat model aligned with high-quality instruction data. Our goal is to clearly improve financial-domain capability while preserving as much of the model's general capability as possible, in service of the finance sector. The currently released models and download links are: | | Base model | Chat model | 8-bit quantized chat model | 4-bit quantized chat model | | --------------- | --- | --- | --- | --- | | XuanYuan-70B-8k | 🤗 [XuanYuan-70B](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B) | 🤗 [XuanYuan-70B-Chat](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat) | 🤗 [XuanYuan-70B-Chat-8bit](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat-8bit) | 🤗 [XuanYuan-70B-Chat-4bit](https://huggingface.co/Duxiaoman-DI/XuanYuan-70B-Chat-4bit) | # Model introduction Since financial scenarios involve many long-text workloads, and building on our efficient distributed training framework, we extended the model's context length from 4k to 8k and 16k during pretraining; to our knowledge this is the first open-source LLM at the 70B parameter scale to reach a context length of 8k or more. For details see [XuanYuan-70B](https://github.com/Duxiaoman-DI/XuanYuan). ## Base model pretraining (1) **Data quality** - We designed a data-cleaning pipeline and carefully prepared high-quality general data (web pages, encyclopedias, forums, social media, Q&A, etc.) and finance-related data (financial news, company announcements, financial encyclopedias, finance books, certification exam questions, etc.). - Chinese/English data: Llama2's English capability is already excellent, so to keep it from degrading we extended the vocabulary and then ran incremental pretraining on high-quality Chinese and English corpora at a 3:1 Chinese-to-English ratio. - General vs. financial data: to strengthen financial capability, the ratio of general to financial corpora during pretraining was 9:1, with the financial share gradually increased as training progressed. (2) **Model training** - Training efficiency: we applied a series of acceleration optimizations, including many improvements to low-level data loading and the distributed training framework, replacing the self-attention module with FlashAttention-2, and replacing the original Llama Python implementations with fused C++/CUDA operators. - Context length: with these optimizations, and given the prevalence of long contexts in financial scenarios, we extended Llama2's original 4k context to 8k and 16k during pretraining. On a cluster of 100 machines with 8x A800 (80G) GPUs each, training looked as follows: | Model | Context length | Throughput | GPU utilization | | ------------ | ---------- | ---------------- | -------- | | XuanYuan-70B | 8192 | 340 tokens/s/gpu | 190 TFLOPS | Notes: (1) gradient accumulation was not enabled; (2) the original Llama2-70B reaches 323 tokens/s/gpu at 4k context length, so our training efficiency is at the current leading level. ## Chat model instruction fine-tuning Starting from the XuanYuan-70B base model above, we performed detailed instruction fine-tuning so that the model can converse and follow human instructions. We used two-stage instruction fine-tuning. Specifically: - Stage 1: train the base model on a large volume of open-source instruction data. We collected roughly 10M open-source multilingual instruction samples and cleaned and deeply filtered them in parallel. The goal of this stage is to cover a diversity of instructions and improve instruction following. - Stage 2: continue instruction fine-tuning with our own high-quality instruction data. Here we carefully built roughly 200k general + financial instruction samples, most of which were verified or rewritten to ensure quality. This stage steers the final training toward different needs and emphases. Our in-house instruction data is intended to keep general conversational ability while placing extra weight on financial Q&A. Specifically, the general instruction data falls into these major categories: common knowledge/encyclopedia, coding, logical reasoning, mathematical computation, creative generation, safety/harmlessness, summarization, translation, and so on, each with multiple subcategories designed to keep the instruction data as diverse and rich as possible. For the financial instruction data we used an even finer subcategory breakdown to cover every area of finance and economics. During training, the ratio of general to financial instruction data was 4:1. Throughout training we kept the 8k context length and used no extrapolation tricks to extend it; we will continue to extend the context length during pretraining in future work. For the question-answer pairs in the training data, the loss is computed only on the answer part. # Quick start The base model, the chat model, and the 8-bit and 4-bit quantized chat models are all published on Hugging Face. Below we describe inference deployment for the base and chat models. ## Dependencies ``` torch >= 2.0 transformers >= 4.33.1 accelerate sentencepiece bitsandbytes (required for 8-bit quantization) optimum (required for 4-bit quantization) auto-gptq (required for 4-bit quantization) vllm (required for inference acceleration) ``` Resource requirements: - The base and chat models need at least two 80G GPUs to load. - The 8-bit quantized version needs at least one 80G GPU to load. - The 4-bit quantized version needs at least one 40G GPU to load. ## Using the base model Because the XuanYuan-70B series is incrementally pretrained from Llama2-70B, the base model is used exactly like the Llama2 base model. ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer model_name_or_path = "Duxiaoman-DI/XuanYuan-70B" tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True) model = LlamaForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.bfloat16, device_map="auto") model.eval() inputs = tokenizer("问题:李时珍是哪一个朝代的人?回答:", return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1) outputs = tokenizer.decode(outputs.cpu()[0][len(inputs.input_ids[0]):], skip_special_tokens=True) print(outputs) ``` ## Using the chat model When constructing prompts for instruction fine-tuning, we followed the conversation format of [FastChat](https://github.com/lm-sys/FastChat). A simple code example: ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer model_name_or_path = "Duxiaoman-DI/XuanYuan-70B-Chat" tokenizer = LlamaTokenizer.from_pretrained(model_name_or_path, use_fast=False, legacy=True) model = LlamaForCausalLM.from_pretrained(model_name_or_path, device_map="auto") model.eval() system_message = "以下是用户和人工智能助手之间的对话。用户以Human开头,人工智能助手以Assistant开头,会对人类提出的问题给出有帮助、高质量、详细和礼貌的回答,并且总是拒绝参与 与不道德、不安全、有争议、政治敏感等相关的话题、问题和指示。\n" seps = [" ", "</s>"] roles = ["Human", "Assistant"] content = "介绍下你自己" prompt = system_message + seps[0] + roles[0] + ": " + content + seps[0] + roles[1] + ":" print(f"输入: {content}") inputs = tokenizer(prompt, return_tensors="pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95) outputs = tokenizer.decode(outputs.cpu()[0][len(inputs.input_ids[0]):], skip_special_tokens=True) print(f"输出: {outputs}") ``` - The example also works with the 8-bit and 4-bit quantized models. - The example is only the simplest deployment code and does not handle multi-turn chat, inference acceleration, etc.; for a complete demo see cli_demo.py. ## CLI tool Our GitHub page provides a command-line demo that supports multi-turn dialogue and vLLM-based inference acceleration. > vLLM does not yet support the quantized models ``` python3 cli_vllm_demo.py --checkpoint_path <XuanYuan-70B-Chat Path> ``` For example: ``` 输入: 你好 输出: 你好,很高兴能为你提供帮助。 输入: 介绍下你自己 输出: 我是轩辕大模型,一个由度小满数据智能应用部AI Lab 开发的人工智能助手,我可以回答各种问题,提供实用的建议和帮助,帮助用户完成各种任务。 输入: 有2块五仁月饼,3块莲蓉月饼,2块豆沙月饼,这些月饼的大小形状质量完全相同。从这7块月饼中,任意取出3块,那么三种月饼都取到 的可能性是几分之几? 输出: 这是一个组合数学问题,我们可以通过计算组合数来解答。 三种月饼都取到,即取到五仁、莲蓉和豆沙各一块。 五仁月饼的选取方法有2种,莲蓉月饼的选取方法有3种,豆沙月饼的选取方法有2种,所以总的取出一种五仁、一种莲蓉、一种豆沙的方法有2*3*2=12种。 从7块月饼中任意取出3块月饼的总的组合数为C(7,3)=35种。 所以,从这7块月饼中,任意取出3块,三种月饼都取到 的可能性为12/35。 ``` ## Quantized deployment To lower the cost of running XuanYuan locally and reduce VRAM requirements, we provide pre-quantized 8-bit and 4-bit versions of the XuanYuan-70B-Chat model. **8-bit offline quantized model** For 8-bit quantization we use the community-standard [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library, which implements the LLM.int8() quantization algorithm along with a set of quantization tools; the method is also integrated into the transformers library, so it is easy to use. In our tests, 8-bit quantization is nearly lossless. **4-bit offline quantized model** For 4-bit quantization we use [auto-gptq](https://github.com/PanQiWei/AutoGPTQ). Its GPTQ algorithm is currently the most popular 4-bit quantization method, and it is integrated into the transformers and optimum libraries, so it is easy to use. The table below gives the VRAM each model needs and results on three benchmarks, CEVAL, CMMLU, and MMLU: | Model | VRAM | CEVAL | CMMLU | MMLU | | ---------------------- | ---- | ----- | ----- | ---- | | XuanYuan-70B-Chat | 129G | 62.15 | 60.41 | 65.3 | | XuanYuan-70B-Chat-8bit | 65G | 62.25 | 59.99 | 65.0 | | XuanYuan-70B-Chat-4bit | 35G | 60.94 | 58.76 | 63.0 | As the table shows: - Compared with the original float16 model, the 8-bit and 4-bit quantized models shrink memory use to 1/2 and 1/4 respectively, which significantly lowers hardware requirements. - The 8-bit model is nearly lossless relative to float16; the 4-bit model drops by roughly 2 points. - We also ran human dialogue evaluations of the quantized chat models; the conclusions match the benchmark results. To use the quantized models, refer to the chat model example code above.
rahulvk007/dockerllama
rahulvk007
2024-10-28T16:30:44Z
132
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dockerllama", "docker", "nlp", "command-generation", "conversational", "en", "dataset:MattCoddity/dockerNLcommands", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T09:20:43Z
--- library_name: transformers tags: - dockerllama - docker - nlp - command-generation license: llama3.2 datasets: - MattCoddity/dockerNLcommands language: - en base_model: - meta-llama/Llama-3.2-1B --- # Model Card for DockerLlama DockerLlama is a Transformers model designed to interpret natural language queries and generate Docker commands. This model facilitates quick and easy command generation for Docker operations, making it ideal for users who want to interact with Docker without memorizing command syntax. ## Model Details ### Model Description DockerLlama, developed as a command-generation model, translates user requests into precise Docker commands. It supports use cases like querying the health of containers, creating networks, and managing Docker resources. DockerLlama is particularly useful for DevOps engineers, software developers, and IT professionals working with containerized applications. - **Developed by:** [rahulvk007](https://www.rahulvk.com) - **Model type:** Language model fine-tuned for Docker command generation - **Language(s):** English (NLP for Docker commands) - **License:** llama3.2 - **Finetuned from model:** meta-llama/Llama-3.2-1B ### Model Sources - **Repository:** [DockerLlama on Hugging Face Hub](https://huggingface.co/rahulvk007/dockerllama) - **Dataset:** [MattCoddity/dockerNLcommands](https://huggingface.co/datasets/MattCoddity/dockerNLcommands) ## Uses ### Direct Use DockerLlama is used directly to translate natural language queries into Docker commands. For example, "Give me a list of running containers that are healthy" would be translated into the command ```docker ps --filter 'status=running' --filter 'health=healthy'```. ### Out-of-Scope Use The model is not suited for general natural language tasks unrelated to Docker or for use cases outside of Docker command generation. ## Bias, Risks, and Limitations DockerLlama is focused on Docker commands, so its performance on unrelated queries or commands not supported by Docker may produce incorrect or irrelevant responses. ### Recommendations Users should verify the generated Docker commands before executing them to avoid unintended effects on their Docker environment. ## How to Get Started with the Model To deploy the model locally, you can use vLLM. Here are some commands: **Command to deploy with vLLM:** ```bash docker run --runtime nvidia --gpus all -p 9000:8000 --ipc=host vllm/vllm-openai:latest --model rahulvk007/dockerllama ``` If you have a low-memory machine with an older GPU (like a GTX 1650), try this: ```bash docker run --gpus all -p 9000:8000 --ipc=host vllm/vllm-openai:latest --model rahulvk007/dockerllama --dtype=half --max-model-len=512 ``` ### Important Prompt Setup Use the following system prompt to ensure the model translates queries accurately: ``` translate this sentence in docker command ``` **Example Request:** To interact with the deployed model, make a POST request to `http://localhost:9000/v1/chat/completions` (change the endpoint to your deployment URL) with the following payload (a curl equivalent is sketched at the end of this card): ```json { "model": "rahulvk007/dockerllama", "messages": [ {"role": "system", "content": "translate this sentence in docker command"}, {"role": "user", "content": "Give me a list of running containers that are healthy."} ] } ``` ## Training Details ### Training Data The model was fine-tuned using the dataset [MattCoddity/dockerNLcommands](https://huggingface.co/datasets/MattCoddity/dockerNLcommands), which includes natural language commands and their Docker command equivalents.
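As referenced above, the same example request can be issued from the shell; a minimal curl sketch against the locally deployed vLLM endpoint:

```bash
curl http://localhost:9000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "rahulvk007/dockerllama",
    "messages": [
      {"role": "system", "content": "translate this sentence in docker command"},
      {"role": "user", "content": "Give me a list of running containers that are healthy."}
    ]
  }'
```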
rewicks/baseline_en-de_64k_ep34
rewicks
2024-10-28T16:24:50Z
117
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-28T16:23:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qgallouedec/Qwen2-0.5B-OnlineDPO-AutoRM
qgallouedec
2024-10-28T16:22:12Z
132
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "online-dpo", "conversational", "dataset:trl-lib/ultrafeedback-prompt", "arxiv:2402.04792", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T14:52:57Z
--- base_model: Qwen/Qwen2-0.5B-Instruct datasets: trl-lib/ultrafeedback-prompt library_name: transformers model_name: Qwen2-0.5B-OnlineDPO-AutoRM tags: - generated_from_trainer - trl - online-dpo licence: license --- # Model Card for Qwen2-0.5B-OnlineDPO-AutoRM This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen2-0.5B-OnlineDPO-AutoRM", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/hnzo995f) This model was trained with Online DPO, a method introduced in [Direct Language Model Alignment from Online AI Feedback](https://huggingface.co/papers/2402.04792). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0.dev0 - Pytorch: 2.4.0 - Datasets: 3.0.2 - Tokenizers: 0.20.0 ## Citations Cite Online DPO as: ```bibtex @article{guo2024direct, title = {{Direct Language Model Alignment from Online AI Feedback}}, author = {Shangmin Guo and Biao Zhang and Tianlin Liu and Tianqi Liu and Misha Khalman and Felipe Llinares and Alexandre Ram{\'{e}} and Thomas Mesnard and Yao Zhao and Bilal Piot and Johan Ferret and Mathieu Blondel}, year = 2024, eprint = {arXiv:2402.04792} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rewicks/baseline_en-de_64k_ep33
rewicks
2024-10-28T16:21:56Z
121
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-28T16:20:21Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mitra-mir/finetuned_t5-1-epoch-4_batches
mitra-mir
2024-10-28T16:20:38Z
114
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-28T16:19:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf
RichardErkhov
2024-10-28T16:19:29Z
21
0
null
[ "gguf", "arxiv:2312.12450", "endpoints_compatible", "region:us" ]
null
2024-10-28T13:42:11Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) EditCoder-6.7b-v1 - GGUF - Model creator: https://huggingface.co/nuprl/ - Original model: https://huggingface.co/nuprl/EditCoder-6.7b-v1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [EditCoder-6.7b-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q2_K.gguf) | Q2_K | 2.36GB | | [EditCoder-6.7b-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [EditCoder-6.7b-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q3_K.gguf) | Q3_K | 3.07GB | | [EditCoder-6.7b-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [EditCoder-6.7b-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [EditCoder-6.7b-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [EditCoder-6.7b-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q4_0.gguf) | Q4_0 | 3.56GB | | [EditCoder-6.7b-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.IQ4_NL.gguf) | IQ4_NL | 3.59GB | | [EditCoder-6.7b-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [EditCoder-6.7b-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q4_K.gguf) | Q4_K | 3.8GB | | [EditCoder-6.7b-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [EditCoder-6.7b-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q4_1.gguf) | Q4_1 | 3.95GB | | [EditCoder-6.7b-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q5_0.gguf) | Q5_0 | 4.33GB | | [EditCoder-6.7b-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [EditCoder-6.7b-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q5_K.gguf) | Q5_K | 4.46GB | | [EditCoder-6.7b-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q5_K_M.gguf) | Q5_K_M | 4.46GB | | [EditCoder-6.7b-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q5_1.gguf) | Q5_1 | 4.72GB | | [EditCoder-6.7b-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q6_K.gguf) | Q6_K | 5.15GB | | [EditCoder-6.7b-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/nuprl_-_EditCoder-6.7b-v1-gguf/blob/main/EditCoder-6.7b-v1.Q8_0.gguf) | Q8_0 | 6.67GB | Original model description: --- language: - code datasets: - nuprl/EditPackFT library_name: transformers pipeline_tag: 
text2text-generation tags: - code model-index: - name: EditCoder-6.7b-v1 results: - task: type: text-generation dataset: type: nuprl/CanItEdit name: CanItEdit Descriptive metrics: - name: pass@1 type: pass@1 value: 0.4815 verified: false - task: type: text-generation dataset: type: nuprl/CanItEdit name: CanItEdit Lazy metrics: - name: pass@1 type: pass@1 value: 0.3696 verified: false --- EditCoder-6.7b (version 1) is a fine-tuned version of [DeepSeek Coder](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) (base model, 6.7b parameters) for instructional code editing. We utilize [EditPackFT](https://huggingface.co/datasets/nuprl/EditPackFT) as our fine-tuning dataset, and we show state-of-the-art performance among non-distilled open source models for code editing, using the [CanItEdit](https://huggingface.co/datasets/nuprl/CanItEdit) benchmark. More information can be found in [our paper](https://arxiv.org/abs/2312.12450). **NOTE: This is the model trained on EditPackFT, not Commits2023FT. We are working on releasing that one soon.** ## Citation If you use our work, please cite our paper as such: ``` @inproceedings{cassano2023edit, title={{Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions}}, author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha}, booktitle={The First International Workshop on Large Language Model for Code}, year={2024}, url={https://arxiv.org/abs/2312.12450} } ``` # Prompt The model has been trained on the following prompt format: ``` ## Code Before: {before} ## Instruction: {instruction} ## Code After: {after} ``` Here is a Python function that can be used for formatting the prompt correctly: ```py def edit_prompt(old, instr): before = f"""## Code Before:\n{old}\n""" instr = f"""## Instruction:\n{instr}\n""" after = f"""## Code After:\n""" return before + instr + after ``` # Train Your Own EditCoder We provide the full pipeline that was used for training our own EditCoder model. The pipeline and instructions can be found on our [GitHub repository](https://github.com/nuprl/CanItEdit/tree/main/editcoder).
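As a rough illustration, one of the GGUF files above can be run locally with llama-cpp-python using the prompt format described in the original card; the file name, context size, and example edit below are assumptions, not part of the original card.

```python
# Hedged sketch with llama-cpp-python; model_path, n_ctx, and the example
# edit are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(model_path="EditCoder-6.7b-v1.Q4_K_M.gguf", n_ctx=4096)

# Build a prompt in the "## Code Before / ## Instruction / ## Code After" format.
prompt = (
    "## Code Before:\ndef add(a, b):\n    return a - b\n"
    "## Instruction:\nFix the bug so the function returns the sum.\n"
    "## Code After:\n"
)
result = llm(prompt, max_tokens=256, stop=["## Code Before:"])
print(result["choices"][0]["text"])
```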
rewicks/baseline_en-de_64k_ep32
rewicks
2024-10-28T16:19:13Z
114
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-28T16:17:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Saideepthi55/st_mpnet_base20
Saideepthi55
2024-10-28T16:16:12Z
106
0
transformers
[ "transformers", "safetensors", "mpnet", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T16:15:55Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
zixianma/mantis_415k-bsline-toolboth-seq_len_8192-lr_1e-5-gl_bs_128-ep_1
zixianma
2024-10-28T16:13:59Z
5
0
null
[ "safetensors", "llava", "generated_from_trainer", "base_model:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind", "base_model:finetune:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind", "license:llama3", "region:us" ]
null
2024-10-28T03:53:37Z
--- license: llama3 base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind tags: - generated_from_trainer model-index: - name: mantis_415k-bsline-toolboth-seq_len_8192-lr_1e-5-gl_bs_128-ep_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://salesforceairesearch.wandb.io/jianguozhang/Mantis/runs/smjz2s6y) # mantis_415k-bsline-toolboth-seq_len_8192-lr_1e-5-gl_bs_128-ep_1 This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.43.0 - Pytorch 2.4.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
linoyts/yarn-art-30-37-32
linoyts
2024-10-28T16:12:59Z
9
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3.5-large", "sd3.5", "sd3.5-diffusers", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:adapter:stabilityai/stable-diffusion-3.5-large", "license:other", "region:us" ]
text-to-image
2024-10-28T16:00:00Z
--- base_model: stabilityai/stable-diffusion-3.5-large library_name: diffusers license: other tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3.5-large - sd3.5 - sd3.5-diffusers instance_prompt: Frog, yarn art style widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3.5-Large DreamBooth LoRA - linoyts/yarn-art-30-37-32 <Gallery /> ## Model description These are linoyts/yarn-art-30-37-32 DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-large. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). LoRA for the text encoder was not enabled. ## Trigger words You should use `Frog, yarn art style` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](linoyts/yarn-art-30-37-32/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/yarn-art-30-37-32', weight_name='pytorch_lora_weights.safetensors') image = pipeline('Frog, yarn art style').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/linoyts/yarn-art-30-37-32/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
rohand8/lora_model_effort2
rohand8
2024-10-28T16:12:59Z
76
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "base_model:quantized:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-10-22T18:22:17Z
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl --- # Uploaded model - **Developed by:** rohand8 - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
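A minimal loading sketch with Unsloth might look as follows; the sequence length, 4-bit flag, and example prompt are assumptions, not settings recorded in this card.

```python
# Hedged inference sketch with Unsloth; max_seq_length, load_in_4bit, and the
# prompt are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="rohand8/lora_model_effort2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode

inputs = tokenizer("Write a haiku about the sea.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```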
lesubra/ECE-PRYMMAL-3B-SLERP-V2
lesubra
2024-10-28T16:12:50Z
8
0
null
[ "safetensors", "phi3", "merge", "mergekit", "lazymergekit", "jpacifico/Chocolatine-3B-Instruct-DPO-Revised", "microsoft/Phi-3.5-mini-instruct", "custom_code", "license:apache-2.0", "region:us" ]
null
2024-10-28T16:10:20Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - jpacifico/Chocolatine-3B-Instruct-DPO-Revised - microsoft/Phi-3.5-mini-instruct --- # ECE-PRYMMAL-3B-SLERP-V2 ECE-PRYMMAL-3B-SLERP-V2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised) * [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) ## 🧩 Configuration ```yaml slices: - sources: - model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised layer_range: [0, 32] - model: microsoft/Phi-3.5-mini-instruct layer_range: [0, 32] merge_method: slerp base_model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
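A minimal loading sketch for the merged model with 🤗 Transformers might look as follows; `trust_remote_code` is assumed because the repository is tagged `custom_code`, and the dtype and prompt are illustrative.

```python
# Hedged loading sketch; trust_remote_code, dtype, and the prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lesubra/ECE-PRYMMAL-3B-SLERP-V2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

inputs = tokenizer("Explain SLERP merging in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```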
olabs-ai/qLeap_v07_instruct
olabs-ai
2024-10-28T16:11:27Z
7
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Llama-3.2-1B-bnb-4bit", "base_model:quantized:unsloth/Llama-3.2-1B-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-10-28T16:07:52Z
--- base_model: unsloth/Llama-3.2-1B-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** olabs-ai - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.2-1B-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rewicks/baseline_en-de_64k_ep30
rewicks
2024-10-28T16:10:23Z
114
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-10-28T16:05:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
martin-gorner/gemma_pirate_instruct_7b-keras
martin-gorner
2024-10-28T16:05:51Z
3
0
keras-hub
[ "keras-hub", "text-generation", "region:us" ]
text-generation
2024-04-30T16:24:48Z
--- library_name: keras-hub pipeline_tag: text-generation --- Gemma fine-tuned to speak like a pirate. This is a [`Gemma` model](https://keras.io/api/keras_nlp/models/gemma) uploaded using the KerasNLP library and can be used with JAX, TensorFlow, and PyTorch backends. This model is related to a `CausalLM` task. Model config: * **name:** gemma_backbone * **trainable:** True * **vocabulary_size:** 256000 * **num_layers:** 28 * **num_query_heads:** 16 * **num_key_value_heads:** 16 * **hidden_dim:** 3072 * **intermediate_dim:** 49152 * **head_dim:** 256 * **layer_norm_epsilon:** 1e-06 * **dropout:** 0 This model card has been generated automatically and should be completed by the model author. See [Model Cards documentation](https://huggingface.co/docs/hub/model-cards) for more information.
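A minimal loading sketch with KerasHub might look as follows; the `hf://` preset URI and the prompt are assumptions, not taken from the original card.

```python
# Hedged loading sketch with KerasHub; the preset URI and prompt are assumptions.
import keras_hub

gemma_lm = keras_hub.models.GemmaCausalLM.from_preset(
    "hf://martin-gorner/gemma_pirate_instruct_7b-keras"
)
print(gemma_lm.generate("How do I navigate by the stars?", max_length=64))
```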
a-scarlett/results
a-scarlett
2024-10-28T16:01:34Z
105
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-22T14:45:04Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - f1 - precision - recall - accuracy model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0107 - Model Preparation Time: 0.0007 - F1: 0.9970 - Precision: 0.9971 - Recall: 0.9970 - Accuracy: 0.9970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | F1 | Precision | Recall | Accuracy | |:-------------:|:------:|:----:|:---------------:|:----------------------:|:------:|:---------:|:------:|:--------:| | 0.0166 | 0.3367 | 100 | 0.0102 | 0.0007 | 0.9979 | 0.9979 | 0.9979 | 0.9979 | | 0.0022 | 0.6734 | 200 | 0.0052 | 0.0007 | 0.9983 | 0.9983 | 0.9983 | 0.9983 | ### Framework versions - Transformers 4.45.2 - Pytorch 2.4.1 - Datasets 3.0.1 - Tokenizers 0.20.1
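A minimal quick-start sketch for this classifier might look as follows; the example input is illustrative, and it assumes the checkpoint is available on the Hub under this repository id.

```python
# Hedged quick-start sketch; the example sentence is illustrative.
from transformers import pipeline

classifier = pipeline("text-classification", model="a-scarlett/results")
print(classifier("This is an example sentence to classify."))
```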
linoyts/yarn-art-30-37
linoyts
2024-10-28T15:59:14Z
6
0
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "sd3.5-large", "sd3.5", "sd3.5-diffusers", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:adapter:stabilityai/stable-diffusion-3.5-large", "license:other", "region:us" ]
text-to-image
2024-10-28T15:48:55Z
--- base_model: stabilityai/stable-diffusion-3.5-large library_name: diffusers license: other tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - sd3.5-large - sd3.5 - sd3.5-diffusers instance_prompt: Frog, yarn art style widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3.5-Large DreamBooth LoRA - linoyts/yarn-art-30-37 <Gallery /> ## Model description These are linoyts/yarn-art-30-37 DreamBooth LoRA weights for stabilityai/stable-diffusion-3.5-large. The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md). LoRA for the text encoder was not enabled. ## Trigger words You should use `Frog, yarn art style` to trigger the image generation. ## Download model [Download the *.safetensors LoRA](linoyts/yarn-art-30-37/tree/main) in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3.5-large', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('linoyts/yarn-art-30-37', weight_name='pytorch_lora_weights.safetensors') image = pipeline('Frog, yarn art style').images[0] ``` ### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke - **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/linoyts/yarn-art-30-37/blob/main/diffusers_lora_weights.safetensors)**. - Rename it and place it in your `models/Lora` folder. - On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/). For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## License Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3.5-large/blob/main/LICENSE.md). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
lesubra/ECE-PRYMMAL-3B-SLERP-V1
lesubra
2024-10-28T15:57:54Z
8
0
null
[ "safetensors", "phi3", "merge", "mergekit", "lazymergekit", "jpacifico/Chocolatine-3B-Instruct-DPO-Revised", "microsoft/Phi-3.5-mini-instruct", "custom_code", "license:apache-2.0", "region:us" ]
null
2024-10-28T15:55:25Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - jpacifico/Chocolatine-3B-Instruct-DPO-Revised - microsoft/Phi-3.5-mini-instruct --- # ECE-PRYMMAL-3B-SLERP-V1 ECE-PRYMMAL-3B-SLERP-V1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [jpacifico/Chocolatine-3B-Instruct-DPO-Revised](https://huggingface.co/jpacifico/Chocolatine-3B-Instruct-DPO-Revised) * [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) ## 🧩 Configuration ```yaml slices: - sources: - model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised layer_range: [0, 32] - model: microsoft/Phi-3.5-mini-instruct layer_range: [0, 32] merge_method: slerp base_model: jpacifico/Chocolatine-3B-Instruct-DPO-Revised parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
g-assismoraes/deberta-semeval25_EN08_CC_fold3
g-assismoraes
2024-10-28T15:55:54Z
162
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T15:54:02Z
--- library_name: transformers license: mit base_model: microsoft/deberta-v3-base tags: - generated_from_trainer model-index: - name: deberta-semeval25_EN08_CC_fold3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-semeval25_EN08_CC_fold3 This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.7711 - Precision Samples: 0.3317 - Recall Samples: 0.6019 - F1 Samples: 0.3501 - Precision Macro: 0.8800 - Recall Macro: 0.4443 - F1 Macro: 0.3645 - Precision Micro: 0.3023 - Recall Micro: 0.5 - F1 Micro: 0.3768 - Precision Weighted: 0.6710 - Recall Weighted: 0.5 - F1 Weighted: 0.2831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:| | 8.7329 | 1.0 | 15 | 9.1899 | 1.0 | 0.0 | 0.0 | 1.0 | 0.2927 | 0.2927 | 1.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | | 8.7557 | 2.0 | 30 | 8.7023 | 0.2944 | 0.4422 | 0.3232 | 0.9506 | 0.3618 | 0.3278 | 0.3291 | 0.3333 | 0.3312 | 0.7740 | 0.3333 | 0.1757 | | 7.7804 | 3.0 | 45 | 8.4060 | 0.2528 | 0.495 | 0.3084 | 0.9279 | 0.3780 | 0.3314 | 0.2736 | 0.3718 | 0.3152 | 0.7236 | 0.3718 | 0.1808 | | 8.1166 | 4.0 | 60 | 8.1882 | 0.4028 | 0.4867 | 0.2957 | 0.9299 | 0.3740 | 0.3329 | 0.2857 | 0.3590 | 0.3182 | 0.7378 | 0.3590 | 0.1933 | | 7.5019 | 5.0 | 75 | 8.0913 | 0.4317 | 0.4933 | 0.2930 | 0.9146 | 0.3862 | 0.3436 | 0.2929 | 0.3718 | 0.3277 | 0.7260 | 0.3718 | 0.2087 | | 7.2052 | 6.0 | 90 | 7.9759 | 0.4183 | 0.5287 | 0.3090 | 0.8878 | 0.4114 | 0.3618 | 0.3056 | 0.4231 | 0.3548 | 0.6770 | 0.4231 | 0.2548 | | 7.4814 | 7.0 | 105 | 7.8667 | 0.4189 | 0.5935 | 0.3361 | 0.8806 | 0.4402 | 0.3644 | 0.3016 | 0.4872 | 0.3725 | 0.6676 | 0.4872 | 0.2780 | | 7.0536 | 8.0 | 120 | 7.8577 | 0.3072 | 0.6019 | 0.3371 | 0.8780 | 0.4443 | 0.3628 | 0.2889 | 0.5 | 0.3662 | 0.6571 | 0.5 | 0.2708 | | 7.5216 | 9.0 | 135 | 7.7850 | 0.3339 | 0.6019 | 0.3524 | 0.8804 | 0.4443 | 0.3651 | 0.3047 | 0.5 | 0.3786 | 0.6720 | 0.5 | 0.2844 | | 7.5535 | 10.0 | 150 | 7.7711 | 0.3317 | 0.6019 | 0.3501 | 0.8800 | 0.4443 | 0.3645 | 0.3023 | 0.5 | 0.3768 | 0.6710 | 0.5 | 0.2831 | ### Framework versions - Transformers 4.46.0 - Pytorch 2.3.1 - Datasets 2.21.0 - Tokenizers 0.20.1
JanNafta/10liomess
JanNafta
2024-10-28T15:50:57Z
29
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-10-28T15:05:07Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: 10liomess --- # 10Liomess <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `10liomess` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('JanNafta/10liomess', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf
RichardErkhov
2024-10-28T15:46:35Z
116
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-28T12:44:27Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

llama3.1_6.5b_mergkit_prunme - GGUF
- Model creator: https://huggingface.co/thucdangvan020999/
- Original model: https://huggingface.co/thucdangvan020999/llama3.1_6.5b_mergkit_prunme/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3.1_6.5b_mergkit_prunme.Q2_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q2_K.gguf) | Q2_K | 2.44GB |
| [llama3.1_6.5b_mergkit_prunme.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q3_K_S.gguf) | Q3_K_S | 2.8GB |
| [llama3.1_6.5b_mergkit_prunme.Q3_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q3_K.gguf) | Q3_K | 3.06GB |
| [llama3.1_6.5b_mergkit_prunme.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q3_K_M.gguf) | Q3_K_M | 3.06GB |
| [llama3.1_6.5b_mergkit_prunme.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q3_K_L.gguf) | Q3_K_L | 3.28GB |
| [llama3.1_6.5b_mergkit_prunme.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.IQ4_XS.gguf) | IQ4_XS | 3.41GB |
| [llama3.1_6.5b_mergkit_prunme.Q4_0.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q4_0.gguf) | Q4_0 | 3.54GB |
| [llama3.1_6.5b_mergkit_prunme.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.IQ4_NL.gguf) | IQ4_NL | 3.57GB |
| [llama3.1_6.5b_mergkit_prunme.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q4_K_S.gguf) | Q4_K_S | 3.56GB |
| [llama3.1_6.5b_mergkit_prunme.Q4_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q4_K.gguf) | Q4_K | 3.74GB |
| [llama3.1_6.5b_mergkit_prunme.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q4_K_M.gguf) | Q4_K_M | 3.74GB |
| [llama3.1_6.5b_mergkit_prunme.Q4_1.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q4_1.gguf) | Q4_1 | 3.89GB |
| [llama3.1_6.5b_mergkit_prunme.Q5_0.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q5_0.gguf) | Q5_0 | 4.24GB |
| [llama3.1_6.5b_mergkit_prunme.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q5_K_S.gguf) | Q5_K_S | 4.24GB |
| [llama3.1_6.5b_mergkit_prunme.Q5_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q5_K.gguf) | Q5_K | 4.34GB |
| [llama3.1_6.5b_mergkit_prunme.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q5_K_M.gguf) | Q5_K_M | 4.34GB |
| [llama3.1_6.5b_mergkit_prunme.Q5_1.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q5_1.gguf) | Q5_1 | 4.58GB |
| [llama3.1_6.5b_mergkit_prunme.Q6_K.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q6_K.gguf) | Q6_K | 4.98GB |
| [llama3.1_6.5b_mergkit_prunme.Q8_0.gguf](https://huggingface.co/RichardErkhov/thucdangvan020999_-_llama3.1_6.5b_mergkit_prunme-gguf/blob/main/llama3.1_6.5b_mergkit_prunme.Q8_0.gguf) | Q8_0 | 6.44GB |

Original model description:
---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---

# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 22]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [29, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
```
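For readers who want to reproduce a passthrough merge like the one above, mergekit ships a small CLI; the following is a sketch, not taken from the card — the config filename and output directory are placeholders, and available flags vary across mergekit versions.

```sh
# Install mergekit, save the YAML above as config.yaml (placeholder name),
# then run the merge; --copy-tokenizer carries the tokenizer into the output.
pip install mergekit
mergekit-yaml config.yaml ./merged-output --copy-tokenizer --lazy-unpickle
```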
Kayabuki4/letese
Kayabuki4
2024-10-28T15:38:09Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T14:09:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
g-assismoraes/deberta-semeval25_EN08_WAR_fold5
g-assismoraes
2024-10-28T15:32:02Z
163
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T15:28:27Z
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_WAR_fold5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-semeval25_EN08_WAR_fold5

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3208
- Precision Samples: 0.2124
- Recall Samples: 0.4636
- F1 Samples: 0.2634
- Precision Macro: 0.6349
- Recall Macro: 0.3430
- F1 Macro: 0.2211
- Precision Micro: 0.2049
- Recall Micro: 0.4258
- F1 Micro: 0.2766
- Precision Weighted: 0.4312
- Recall Weighted: 0.4258
- F1 Weighted: 0.2159

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.8716 | 1.0 | 43 | 11.1010 | 0.1764 | 0.1324 | 0.1418 | 0.9676 | 0.1392 | 0.1131 | 0.1792 | 0.1211 | 0.1445 | 0.9047 | 0.1211 | 0.0437 |
| 9.3457 | 2.0 | 86 | 10.9271 | 0.2529 | 0.2659 | 0.1663 | 0.8864 | 0.1928 | 0.1307 | 0.1531 | 0.2422 | 0.1876 | 0.7249 | 0.2422 | 0.0879 |
| 9.2504 | 3.0 | 129 | 10.8593 | 0.1657 | 0.2917 | 0.1887 | 0.8654 | 0.2050 | 0.1341 | 0.1554 | 0.2695 | 0.1971 | 0.7079 | 0.2695 | 0.0907 |
| 8.6182 | 4.0 | 172 | 10.7548 | 0.1653 | 0.3665 | 0.2061 | 0.8101 | 0.2438 | 0.1538 | 0.1616 | 0.3320 | 0.2174 | 0.6140 | 0.3320 | 0.1218 |
| 10.3126 | 5.0 | 215 | 10.6097 | 0.1839 | 0.3906 | 0.2268 | 0.7668 | 0.2733 | 0.1745 | 0.1742 | 0.3633 | 0.2354 | 0.5543 | 0.3633 | 0.1550 |
| 10.024 | 6.0 | 258 | 10.5335 | 0.2143 | 0.3844 | 0.2450 | 0.7510 | 0.2770 | 0.1887 | 0.2046 | 0.3477 | 0.2576 | 0.5495 | 0.3477 | 0.1755 |
| 9.0775 | 7.0 | 301 | 10.4518 | 0.1985 | 0.3927 | 0.2402 | 0.6712 | 0.3039 | 0.2036 | 0.1980 | 0.3828 | 0.2610 | 0.4572 | 0.3828 | 0.1927 |
| 8.1542 | 8.0 | 344 | 10.4111 | 0.2090 | 0.4501 | 0.2575 | 0.6283 | 0.3104 | 0.2038 | 0.1943 | 0.4023 | 0.2621 | 0.4116 | 0.4023 | 0.1972 |
| 7.7217 | 9.0 | 387 | 10.3686 | 0.2166 | 0.4562 | 0.2648 | 0.6165 | 0.3358 | 0.2205 | 0.2038 | 0.4219 | 0.2748 | 0.4098 | 0.4219 | 0.2166 |
| 8.4374 | 10.0 | 430 | 10.3208 | 0.2124 | 0.4636 | 0.2634 | 0.6349 | 0.3430 | 0.2211 | 0.2049 | 0.4258 | 0.2766 | 0.4312 | 0.4258 | 0.2159 |

### Framework versions

- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
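The sample/micro/macro precision-recall metrics above suggest a multi-label setup; the following is a minimal inference sketch under that assumption — the sigmoid readout, the 0.5 threshold, and the example sentence are ours, not from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "g-assismoraes/deberta-semeval25_EN08_WAR_fold5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "Example news paragraph to classify.", return_tensors="pt", truncation=True
)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label readout: sigmoid per class, then threshold (0.5 is an assumption).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```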
readerbench/llama3.2_3b_instruct_qall_lr_small
readerbench
2024-10-28T15:31:17Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T15:25:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SOUMYADEEPSAR/political_bias_deberta-mnli
SOUMYADEEPSAR
2024-10-28T15:30:42Z
10
0
adapter-transformers
[ "adapter-transformers", "deberta", "dataset:mediabiasgroup/mbib-base", "region:us" ]
null
2024-10-28T15:30:39Z
---
tags:
- adapter-transformers
- deberta
datasets:
- mediabiasgroup/mbib-base
---

# Adapter `SOUMYADEEPSAR/political_bias_deberta-mnli` for microsoft/deberta-base-mnli

An [adapter](https://adapterhub.ml) for the `microsoft/deberta-base-mnli` model that was trained on the [mediabiasgroup/mbib-base](https://huggingface.co/datasets/mediabiasgroup/mbib-base/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.

## Usage

First, install `adapters`:

```
pip install -U adapters
```

Now, the adapter can be loaded and activated like this:

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-base-mnli")
adapter_name = model.load_adapter("SOUMYADEEPSAR/political_bias_deberta-mnli", set_active=True)
```

## Architecture & Training

<!-- Add some description here -->

## Evaluation results

<!-- Add some description here -->

## Citation

<!-- Add some description here -->
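Once the adapter is active, inference is an ordinary forward pass. A minimal self-contained sketch follows; the example sentence is ours, and since the card does not document the head's label set, we only print the argmax index.

```python
import torch
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("microsoft/deberta-base-mnli")
model.load_adapter("SOUMYADEEPSAR/political_bias_deberta-mnli", set_active=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base-mnli")

inputs = tokenizer(
    "The senator's speech drew sharp criticism from the press.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# The card does not document the head's label names, so report the index only.
print("predicted class index:", logits.argmax(dim=-1).item())
```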
kafikani/dynexautotrain2
kafikani
2024-10-28T15:30:38Z
5
0
null
[ "tensorboard", "safetensors", "bert", "autotrain", "text-classification", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "region:us" ]
text-classification
2024-10-28T14:05:03Z
---
tags:
- autotrain
- text-classification
base_model: google-bert/bert-base-uncased
widget:
- text: "I love AutoTrain"
---

# Model Trained Using AutoTrain

- Problem type: Text Classification

## Validation Metrics

- loss: 0.5070299506187439
- f1_macro: 0.7015964635012253
- f1_micro: 0.8336557059961315
- f1_weighted: 0.8272333726879182
- precision_macro: 0.7312202312202313
- precision_micro: 0.8336557059961315
- precision_weighted: 0.8302637557956707
- recall_macro: 0.692090317090317
- recall_micro: 0.8336557059961315
- recall_weighted: 0.8336557059961315
- accuracy: 0.8336557059961315
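To sanity-check the classifier against the widget example, something like the following should work, assuming the repo's weights load directly with `transformers` (the card does not show an inference snippet, so this is a sketch):

```python
from transformers import pipeline

# Hypothetical direct load of this AutoTrain checkpoint from the Hub.
classifier = pipeline("text-classification", model="kafikani/dynexautotrain2")
print(classifier("I love AutoTrain"))
```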
g-assismoraes/deberta-semeval25_EN08_WAR_fold4
g-assismoraes
2024-10-28T15:28:23Z
200
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T15:24:30Z
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta-semeval25_EN08_WAR_fold4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-semeval25_EN08_WAR_fold4

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.6077
- Precision Samples: 0.1720
- Recall Samples: 0.4912
- F1 Samples: 0.2394
- Precision Macro: 0.6271
- Recall Macro: 0.3805
- F1 Macro: 0.2080
- Precision Micro: 0.1669
- Recall Micro: 0.4907
- F1 Micro: 0.2491
- Precision Weighted: 0.4263
- Recall Weighted: 0.4907
- F1 Weighted: 0.1966

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.8033 | 1.0 | 43 | 9.3728 | 0.1378 | 0.2438 | 0.1666 | 0.9162 | 0.1942 | 0.1269 | 0.1474 | 0.2593 | 0.1879 | 0.7785 | 0.2593 | 0.0801 |
| 10.1064 | 2.0 | 86 | 9.1982 | 0.1369 | 0.2669 | 0.1707 | 0.8713 | 0.2046 | 0.1301 | 0.1354 | 0.2870 | 0.1840 | 0.6798 | 0.2870 | 0.0892 |
| 9.246 | 3.0 | 129 | 9.1481 | 0.1404 | 0.3175 | 0.1842 | 0.8340 | 0.2329 | 0.1419 | 0.1426 | 0.3333 | 0.1997 | 0.6338 | 0.3333 | 0.1101 |
| 10.4091 | 4.0 | 172 | 9.0465 | 0.1517 | 0.3645 | 0.2022 | 0.8028 | 0.2641 | 0.1545 | 0.1517 | 0.3843 | 0.2176 | 0.5867 | 0.3843 | 0.1306 |
| 9.7993 | 5.0 | 215 | 9.0293 | 0.1560 | 0.3716 | 0.2076 | 0.7276 | 0.3003 | 0.1789 | 0.1514 | 0.3981 | 0.2194 | 0.5169 | 0.3981 | 0.1532 |
| 10.5038 | 6.0 | 258 | 8.8051 | 0.1696 | 0.4602 | 0.2349 | 0.6589 | 0.3553 | 0.2003 | 0.1697 | 0.4722 | 0.2497 | 0.4598 | 0.4722 | 0.1771 |
| 9.1186 | 7.0 | 301 | 8.7311 | 0.1773 | 0.4607 | 0.2312 | 0.6367 | 0.3637 | 0.2015 | 0.1643 | 0.4722 | 0.2437 | 0.4362 | 0.4722 | 0.1792 |
| 9.7935 | 8.0 | 344 | 8.6690 | 0.1797 | 0.4680 | 0.2316 | 0.6339 | 0.3667 | 0.1992 | 0.1629 | 0.4676 | 0.2416 | 0.4367 | 0.4676 | 0.1808 |
| 8.1802 | 9.0 | 387 | 8.6158 | 0.1712 | 0.4941 | 0.2400 | 0.6267 | 0.3832 | 0.2082 | 0.1664 | 0.4907 | 0.2485 | 0.4238 | 0.4907 | 0.1938 |
| 8.7028 | 10.0 | 430 | 8.6077 | 0.1720 | 0.4912 | 0.2394 | 0.6271 | 0.3805 | 0.2080 | 0.1669 | 0.4907 | 0.2491 | 0.4263 | 0.4907 | 0.1966 |

### Framework versions

- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
zelk12/MT2-Gen1-gemma-2-9B
zelk12
2024-10-28T15:28:09Z
8
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "mergekit", "merge", "conversational", "base_model:zelk12/MT2-Gen1-IB-gemma-2-9B", "base_model:merge:zelk12/MT2-Gen1-IB-gemma-2-9B", "base_model:zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B", "base_model:merge:zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-24T16:40:18Z
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B
- zelk12/MT2-Gen1-IB-gemma-2-9B
model-index:
- name: MT2-Gen1-gemma-2-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 78.56
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT2-Gen1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 44.14
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT2-Gen1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 10.12
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT2-Gen1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.42
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT2-Gen1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.01
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT2-Gen1-gemma-2-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.52
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=zelk12/MT2-Gen1-gemma-2-9B
      name: Open LLM Leaderboard
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B](https://huggingface.co/zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B)
* [zelk12/MT2-Gen1-IB-gemma-2-9B](https://huggingface.co/zelk12/MT2-Gen1-IB-gemma-2-9B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
- model: zelk12/MT2-Gen1-IB-gemma-2-9B
- model: zelk12/MT2-Gen1-MMMUMAG-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT2-Gen1-IB-gemma-2-9B
dtype: bfloat16
parameters:
  t: 0.666666667
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_zelk12__MT2-Gen1-gemma-2-9B)

| Metric             |Value|
|--------------------|----:|
|Avg.                |32.46|
|IFEval (0-Shot)     |78.56|
|BBH (3-Shot)        |44.14|
|MATH Lvl 5 (4-Shot) |10.12|
|GPQA (0-shot)       |12.42|
|MuSR (0-shot)       |12.01|
|MMLU-PRO (5-shot)   |37.52|
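To try the merged model as a chat assistant, a standard `transformers` pipeline call should suffice. This is a sketch: the dtype, device placement, and generation settings below are illustrative choices, not from the card.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="zelk12/MT2-Gen1-gemma-2-9B",
    torch_dtype=torch.bfloat16,  # illustrative; matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain SLERP merging in one sentence."}]
out = pipe(messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])
```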
MaziyarPanahi/L3-Nymeria-8B-GGUF
MaziyarPanahi
2024-10-28T15:22:27Z
38
3
null
[ "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "base_model:tannedbum/L3-Nymeria-8B", "base_model:quantized:tannedbum/L3-Nymeria-8B", "region:us", "conversational" ]
text-generation
2024-10-28T14:55:23Z
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
model_name: L3-Nymeria-8B-GGUF
base_model: tannedbum/L3-Nymeria-8B
inference: false
model_creator: tannedbum
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/L3-Nymeria-8B-GGUF](https://huggingface.co/MaziyarPanahi/L3-Nymeria-8B-GGUF)
- Model creator: [tannedbum](https://huggingface.co/tannedbum)
- Original model: [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)

## Description

[MaziyarPanahi/L3-Nymeria-8B-GGUF](https://huggingface.co/MaziyarPanahi/L3-Nymeria-8B-GGUF) contains GGUF format model files for [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
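A minimal way to run one of these GGUF files locally is `llama-cpp-python`. This is a sketch: the quant filename `L3-Nymeria-8B.Q4_K_M.gguf` is a guess at this repo's naming convention, so check the repo's file list first.

```python
from llama_cpp import Llama

# Download the GGUF from the Hub and load it; the filename is an assumption.
llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/L3-Nymeria-8B-GGUF",
    filename="L3-Nymeria-8B.Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```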
aigcode/AIGCode-3B-7B-chat-v0.1
aigcode
2024-10-28T15:08:42Z
13
2
null
[ "safetensors", "hf_aigcodexmoe", "license:apache-2.0", "region:us" ]
null
2024-10-27T16:16:20Z
---
license: apache-2.0
---
CohenQu/observation_200000
CohenQu
2024-10-28T15:07:42Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T14:50:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
g-assismoraes/deberta-large-semeval25_EN08_fold5
g-assismoraes
2024-10-28T15:07:37Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "base_model:finetune:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-10-28T14:52:46Z
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
model-index:
- name: deberta-large-semeval25_EN08_fold5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-large-semeval25_EN08_fold5

This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.7182
- Precision Samples: 0.1208
- Recall Samples: 0.8208
- F1 Samples: 0.2037
- Precision Macro: 0.3884
- Recall Macro: 0.6861
- F1 Macro: 0.2590
- Precision Micro: 0.1199
- Recall Micro: 0.7958
- F1 Micro: 0.2083
- Precision Weighted: 0.2373
- Recall Weighted: 0.7958
- F1 Weighted: 0.2493

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 8.4241 | 1.0 | 73 | 9.1963 | 0.1263 | 0.4598 | 0.1863 | 0.8558 | 0.3082 | 0.2345 | 0.1268 | 0.3514 | 0.1863 | 0.5821 | 0.3514 | 0.1117 |
| 9.9703 | 2.0 | 146 | 8.5160 | 0.1313 | 0.6118 | 0.1989 | 0.7084 | 0.4108 | 0.2495 | 0.1110 | 0.5495 | 0.1848 | 0.4273 | 0.5495 | 0.1497 |
| 8.6571 | 3.0 | 219 | 8.3760 | 0.1104 | 0.6956 | 0.1813 | 0.5869 | 0.4853 | 0.2363 | 0.1057 | 0.6366 | 0.1814 | 0.3062 | 0.6366 | 0.1831 |
| 9.387 | 4.0 | 292 | 8.1585 | 0.1134 | 0.7748 | 0.1885 | 0.5228 | 0.6063 | 0.2487 | 0.1050 | 0.7447 | 0.1840 | 0.2682 | 0.7447 | 0.2017 |
| 8.4583 | 5.0 | 365 | 8.1996 | 0.1173 | 0.7660 | 0.1960 | 0.4457 | 0.6482 | 0.2512 | 0.1156 | 0.7417 | 0.2001 | 0.2496 | 0.7417 | 0.2253 |
| 6.3786 | 6.0 | 438 | 7.6840 | 0.1057 | 0.8007 | 0.1802 | 0.4090 | 0.6701 | 0.2410 | 0.1031 | 0.7838 | 0.1822 | 0.2405 | 0.7838 | 0.2289 |
| 8.2131 | 7.0 | 511 | 7.8402 | 0.1154 | 0.8003 | 0.1953 | 0.3992 | 0.6695 | 0.2514 | 0.1125 | 0.7688 | 0.1963 | 0.2317 | 0.7688 | 0.2324 |
| 6.8285 | 8.0 | 584 | 7.7532 | 0.1177 | 0.8106 | 0.1991 | 0.3970 | 0.6775 | 0.2552 | 0.1173 | 0.7808 | 0.2040 | 0.2350 | 0.7808 | 0.2416 |
| 5.5413 | 9.0 | 657 | 7.7258 | 0.1201 | 0.8140 | 0.2027 | 0.3872 | 0.6811 | 0.2571 | 0.1187 | 0.7838 | 0.2062 | 0.2364 | 0.7838 | 0.2474 |
| 5.9931 | 10.0 | 730 | 7.7182 | 0.1208 | 0.8208 | 0.2037 | 0.3884 | 0.6861 | 0.2590 | 0.1199 | 0.7958 | 0.2083 | 0.2373 | 0.7958 | 0.2493 |

### Framework versions

- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf
RichardErkhov
2024-10-28T15:06:25Z
13
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-28T12:30:08Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

Falcon2-5.5B-Swedish - GGUF
- Model creator: https://huggingface.co/ssmits/
- Original model: https://huggingface.co/ssmits/Falcon2-5.5B-Swedish/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Falcon2-5.5B-Swedish.Q2_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q2_K.gguf) | Q2_K | 2.03GB |
| [Falcon2-5.5B-Swedish.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q3_K_S.gguf) | Q3_K_S | 2.35GB |
| [Falcon2-5.5B-Swedish.Q3_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q3_K.gguf) | Q3_K | 2.56GB |
| [Falcon2-5.5B-Swedish.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q3_K_M.gguf) | Q3_K_M | 2.56GB |
| [Falcon2-5.5B-Swedish.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q3_K_L.gguf) | Q3_K_L | 2.72GB |
| [Falcon2-5.5B-Swedish.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.IQ4_XS.gguf) | IQ4_XS | 2.87GB |
| [Falcon2-5.5B-Swedish.Q4_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q4_0.gguf) | Q4_0 | 2.99GB |
| [Falcon2-5.5B-Swedish.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.IQ4_NL.gguf) | IQ4_NL | 3.01GB |
| [Falcon2-5.5B-Swedish.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q4_K_S.gguf) | Q4_K_S | 2.99GB |
| [Falcon2-5.5B-Swedish.Q4_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q4_K.gguf) | Q4_K | 3.19GB |
| [Falcon2-5.5B-Swedish.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q4_K_M.gguf) | Q4_K_M | 3.19GB |
| [Falcon2-5.5B-Swedish.Q4_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q4_1.gguf) | Q4_1 | 3.29GB |
| [Falcon2-5.5B-Swedish.Q5_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q5_0.gguf) | Q5_0 | 3.6GB |
| [Falcon2-5.5B-Swedish.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q5_K_S.gguf) | Q5_K_S | 3.6GB |
| [Falcon2-5.5B-Swedish.Q5_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q5_K.gguf) | Q5_K | 3.8GB |
| [Falcon2-5.5B-Swedish.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q5_K_M.gguf) | Q5_K_M | 3.8GB |
| [Falcon2-5.5B-Swedish.Q5_1.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q5_1.gguf) | Q5_1 | 3.9GB |
| [Falcon2-5.5B-Swedish.Q6_K.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q6_K.gguf) | Q6_K | 4.24GB |
| [Falcon2-5.5B-Swedish.Q8_0.gguf](https://huggingface.co/RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf/blob/main/Falcon2-5.5B-Swedish.Q8_0.gguf) | Q8_0 | 5.41GB |

Original model description:
---
base_model:
- tiiuae/falcon-11B
library_name: transformers
tags:
- mergekit
- merge
- lazymergekit
- tiiuae/falcon-11B
license: apache-2.0
language:
- sv
---

## Why prune?

Even though [Falcon-11B](https://huggingface.co/tiiuae/falcon-11B) is trained on 5T tokens, it is still undertrained, as can be seen by this graph:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/QeaL9bOrPskustzFpjMUP.png)

This is why the choice was made to prune 50% of the layers. Note that ~1B tokens of continued pre-training (~1M rows of 1k tokens) is still required to restore the perplexity of this model in the desired language. I'm planning on doing that for certain languages, depending on how much compute will be available.

# sliced

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:
* [tiiuae/falcon-11B](https://huggingface.co/tiiuae/falcon-11B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: tiiuae/falcon-11B
    layer_range: [0, 25]
- sources:
  - model: tiiuae/falcon-11B
    layer_range: [56, 59]
merge_method: passthrough
dtype: bfloat16
```

[PruneMe](https://github.com/arcee-ai/PruneMe) was used to investigate layer similarity on the wikimedia/wikipedia Swedish (sv) subset with 2000 samples. The layer ranges for pruning were determined from this analysis to maintain performance while reducing model size.

![Layer Similarity Plot](https://cdn-uploads.huggingface.co/production/uploads/660c0a02cf274b3ab77dd6b7/aS5jGo6KLv6BsmW_aO4PB.png)

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "ssmits/Falcon2-5.5B-Swedish"

# Build a text-generation pipeline for the pruned model.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)
sequences = pipeline(
    "Can you explain the concepts of Quantum Computing?",
    max_length=200,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**

For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).

## Direct Use

Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)

## Out-of-Scope Use

Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.

## Bias, Risks, and Limitations

Falcon2-5.5B is trained mostly on English, but also on German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.

## Recommendations

We recommend that users of Falcon2-5.5B consider finetuning it for their specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
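To run one of the quants above locally, a typical llama.cpp workflow looks like this (a sketch: the chosen quant file is one of those listed, but the `llama-cli` binary name and flags depend on your llama.cpp build):

```sh
# Fetch a mid-size quant from this repo, then run it with llama.cpp.
huggingface-cli download RichardErkhov/ssmits_-_Falcon2-5.5B-Swedish-gguf \
  Falcon2-5.5B-Swedish.Q4_K_M.gguf --local-dir .

./llama-cli -m Falcon2-5.5B-Swedish.Q4_K_M.gguf \
  -p "Skriv en kort dikt om hösten." -n 128
```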
async0x42/Qwen2.5-0.5B-exl2_4.0bpw
async0x42
2024-10-28T15:04:37Z
6
0
transformers
[ "transformers", "qwen2", "text-generation", "conversational", "en", "arxiv:2407.10671", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "exl2", "region:us" ]
text-generation
2024-10-28T15:04:13Z
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# Qwen2.5-0.5B

## Introduction

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:

- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.

**This repo contains the base 0.5B Qwen2.5 model**, which has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens

**We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., to this model.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Requirements

The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error:

```
KeyError: 'qwen2'
```

## Evaluation & Performance

Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).

For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title={Qwen2 Technical Report},
    author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal={arXiv preprint arXiv:2407.10671},
    year={2024}
}
```
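Although this repo hosts an exl2 quant (which needs an exllamav2-based loader rather than plain `transformers`), the card describes the base model; a minimal completion sketch against the original full-precision repo follows — the prompt and generation length are ours.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base-model completion (not chat): uses the original full-precision repo,
# since the exl2 quant in this repo requires an exllamav2-based loader.
model_id = "Qwen/Qwen2.5-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```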
CohenQu/observation_50000
CohenQu
2024-10-28T15:00:45Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-28T14:51:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]