Dataset schema:

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-07-14 12:27:51 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (520 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-07-14 12:25:52 |
| card | string (length) | 11 | 1.01M |
Niggendar/ponymatureSDXL_ponyeclipse10
Niggendar
2024-05-24T18:52:08Z
87
2
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-24T18:46:59Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
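Since the get-started section above is still a placeholder, here is a minimal loading sketch based only on the repo's declared tags (`diffusers`, `diffusers:StableDiffusionXLPipeline`); the prompt and generation settings are illustrative assumptions, not the author's recommended usage.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint with the pipeline class declared in the repo tags
# (diffusers:StableDiffusionXLPipeline); prompt and settings are illustrative.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/ponymatureSDXL_ponyeclipse10",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a scenic mountain landscape at sunset",  # hypothetical prompt
    num_inference_steps=30,
).images[0]
image.save("output.png")
```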
thesven/Yi-1.5-9B-Chat-GGUF
thesven
2024-05-24T18:39:06Z
6
0
null
[ "gguf", "arxiv:2403.04652", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-24T12:19:00Z
--- license: apache-2.0 --- <div align="center"> <picture> <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px"> </picture> </div> <p align="center"> <a href="https://github.com/01-ai">🐙 GitHub</a> • <a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> • <a href="https://twitter.com/01ai_yi">🐤 Twitter</a> • <a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a> <br/> <a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> • <a href="https://01-ai.github.io/">💪 Tech Blog</a> • <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> • <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a> </p> # Quantization Description <div style="text-align: center;"> <a href="https://github.com/thesven/GGUF-n-Go"> <img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;"> </a> </div> This repo contains GGUF quantized versions of the Yi 1.5 9B Chat model. The model is supplied in different quantizations so that you can see what works best on the hardware you would like to run it on. The repo contains quantizations in the following types: - Q4_0 - Q4_1 - Q4_K - Q4_K_S - Q4_K_M - Q5_0 - Q5_1 - Q5_K - Q5_K_M - Q5_K_S - Q6_K - Q8_0 - Q2_K - Q3_K - Q3_K_S - Q3_K_XS - IQ2_K - IQ3_S - IQ3_XXS - IQ4_NL - IQ4_XS - IQ5_K - IQ2_S - IQ2_XS - IQ1_S # Intro Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. 
<div align="center"> Model | Context Length | Pre-trained Tokens | :------------: | :------------: | :------------: | | Yi-1.5 | 4K, 16K, 32K | 3.6T </div> # Models - Chat models <div align="center"> | Name | Download | | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| </div> - Base models <div align="center"> | Name | Download | | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| | Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) • [🔍 wisemodel](https://wisemodel.cn/organization/01.AI)| </div> # Benchmarks - Chat models Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/KcsJ9Oc1VnEmfCDEJc5cd.png) Yi-1.5-9B-Chat is the top performer among similarly sized open-source models. 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/xf6pLg5jqRCwjlh6m3t6_.png) - Base models Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/BwU7QM-03dZvZzwdIE1xY.png) Yi-1.5-9B is the top performer among similarly sized open-source models. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/y-EYSYPT-3aWLJ0x8R94F.png) # Quick Start For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
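In addition to the upstream Quick Start, here is a minimal sketch of running one of the quantized files from this repo with the `llama-cpp-python` bindings; the filename and settings below are assumptions, so substitute whichever quantization from the list above fits your hardware.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The filename is illustrative: pick any of the quantization types listed above.
llm = Llama(
    model_path="Yi-1.5-9B-Chat-Q4_K_M.gguf",
    n_ctx=4096,        # assumption: the 4K-context chat variant
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```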
ben-wycliff/sexed-llama2-7b-sft-lora-v1
ben-wycliff
2024-05-24T18:37:28Z
4
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-chat-hf", "base_model:adapter:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-05-22T20:02:08Z
--- library_name: peft base_model: meta-llama/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
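Because the get-started section above is a placeholder, the following is a minimal sketch inferred from the repo tags (a PEFT adapter on `meta-llama/Llama-2-7b-chat-hf`); the prompt and generation settings are illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model named in the card, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "ben-wycliff/sexed-llama2-7b-sft-lora-v1")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```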
Kalloniatis/Humor-Recognition-Greek-mDeBERTa
Kalloniatis
2024-05-24T18:35:59Z
106
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "el", "dataset:kallantis/Greek-Humorous-Dataset", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-24T17:16:49Z
--- library_name: transformers datasets: - kallantis/Greek-Humorous-Dataset language: - el pipeline_tag: text-classification --- # Humor-Recognition-Greek-mDeBERTa This model is based on mDeBERTa ("microsoft/mdeberta-v3-base") and was fine-tuned for Humor Recognition in the Greek language. ## Model Details The model was fine-tuned for 10 epochs on the Greek Humorous Dataset. ## Pre-processing details The text needs to be pre-processed by removing all Greek diacritics and punctuation and converting all letters to lowercase. ## Load Pretrained Model ```python from transformers import DebertaV2Tokenizer, DebertaV2ForSequenceClassification import torch tokenizer = DebertaV2Tokenizer.from_pretrained('kallantis/Humor-Recognition-Greek-mDeBERTa') model = DebertaV2ForSequenceClassification.from_pretrained('kallantis/Humor-Recognition-Greek-mDeBERTa') ```
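A hypothetical end-to-end sketch that applies the pre-processing rule above before classification; it assumes the `tokenizer` and `model` from the block above are in scope, and the example sentence and label interpretation are illustrative (the card does not document the label mapping).

```python
import string
import unicodedata

import torch

def preprocess(text: str) -> str:
    # Strip diacritics (combining marks), drop ASCII punctuation, lowercase,
    # per the card's pre-processing notes.
    decomposed = unicodedata.normalize("NFD", text)
    no_marks = "".join(c for c in decomposed if unicodedata.category(c) != "Mn")
    no_punct = no_marks.translate(str.maketrans("", "", string.punctuation))
    return no_punct.lower()

# Hypothetical example sentence; `tokenizer` and `model` come from the
# "Load Pretrained Model" block above.
text = preprocess("Τι ωραία μέρα σήμερα!")
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```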
dreamgen/llama3-8b-assistant-test-run1-sft-e2
dreamgen
2024-05-24T18:35:16Z
62
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:cc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T21:17:37Z
--- license: cc --- - **What is this?** Nothing interesting, just an experiment. - **License:** CC-BY-NC ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------------|------:|--------------|-----:|---|-----:| |all | |acc |0.6606|± |0.0323| | | |acc_norm |0.6669|± |0.0093| | | |truthfulqa_mc1|0.3403|± |0.0166| | | |truthfulqa_mc2|0.5085|± |0.0153| | | |qem |0.7020|± |0.0126| |leaderboard:arc:challenge:25 | 0|acc |0.5683|± |0.0145| | | |acc_norm |0.5469|± |0.0145| |leaderboard:gsm8k:5 | 0|qem |0.7020|± |0.0126| |leaderboard:hellaswag:10 | 0|acc |0.5990|± |0.0049| | | |acc_norm |0.7868|± |0.0041| |leaderboard:mmlu:_average:5 | |acc |0.6621|± |0.0334| |leaderboard:mmlu:abstract_algebra:5 | 0|acc |0.3100|± |0.0465| |leaderboard:mmlu:anatomy:5 | 0|acc |0.6667|± |0.0407| |leaderboard:mmlu:astronomy:5 | 0|acc |0.7368|± |0.0358| |leaderboard:mmlu:business_ethics:5 | 0|acc |0.7000|± |0.0461| |leaderboard:mmlu:clinical_knowledge:5 | 0|acc |0.7434|± |0.0269| |leaderboard:mmlu:college_biology:5 | 0|acc |0.7847|± |0.0344| |leaderboard:mmlu:college_chemistry:5 | 0|acc |0.4500|± |0.0500| |leaderboard:mmlu:college_computer_science:5 | 0|acc |0.4900|± |0.0502| |leaderboard:mmlu:college_mathematics:5 | 0|acc |0.4000|± |0.0492| |leaderboard:mmlu:college_medicine:5 | 0|acc |0.6301|± |0.0368| |leaderboard:mmlu:college_physics:5 | 0|acc |0.4412|± |0.0494| |leaderboard:mmlu:computer_security:5 | 0|acc |0.7900|± |0.0409| |leaderboard:mmlu:conceptual_physics:5 | 0|acc |0.6043|± |0.0320| |leaderboard:mmlu:econometrics:5 | 0|acc |0.5175|± |0.0470| |leaderboard:mmlu:electrical_engineering:5 | 0|acc |0.6207|± |0.0404| |leaderboard:mmlu:elementary_mathematics:5 | 0|acc |0.4021|± |0.0253| |leaderboard:mmlu:formal_logic:5 | 0|acc |0.5238|± |0.0447| |leaderboard:mmlu:global_facts:5 | 0|acc |0.4400|± |0.0499| |leaderboard:mmlu:high_school_biology:5 | 0|acc |0.8000|± |0.0228| |leaderboard:mmlu:high_school_chemistry:5 | 0|acc |0.5419|± |0.0351| |leaderboard:mmlu:high_school_computer_science:5 | 0|acc |0.7200|± |0.0451| |leaderboard:mmlu:high_school_european_history:5 | 0|acc |0.7636|± |0.0332| |leaderboard:mmlu:high_school_geography:5 | 0|acc |0.7828|± |0.0294| |leaderboard:mmlu:high_school_government_and_politics:5| 0|acc |0.8756|± |0.0238| |leaderboard:mmlu:high_school_macroeconomics:5 | 0|acc |0.6744|± |0.0238| |leaderboard:mmlu:high_school_mathematics:5 | 0|acc |0.3630|± |0.0293| |leaderboard:mmlu:high_school_microeconomics:5 | 0|acc |0.7563|± |0.0279| |leaderboard:mmlu:high_school_physics:5 | 0|acc |0.3775|± |0.0396| |leaderboard:mmlu:high_school_psychology:5 | 0|acc |0.8569|± |0.0150| |leaderboard:mmlu:high_school_statistics:5 | 0|acc |0.4769|± |0.0341| |leaderboard:mmlu:high_school_us_history:5 | 0|acc |0.8039|± |0.0279| |leaderboard:mmlu:high_school_world_history:5 | 0|acc |0.8439|± |0.0236| |leaderboard:mmlu:human_aging:5 | 0|acc |0.7399|± |0.0294| |leaderboard:mmlu:human_sexuality:5 | 0|acc |0.7710|± |0.0369| |leaderboard:mmlu:international_law:5 | 0|acc |0.7521|± |0.0394| |leaderboard:mmlu:jurisprudence:5 | 0|acc |0.7593|± |0.0413| |leaderboard:mmlu:logical_fallacies:5 | 0|acc |0.7423|± |0.0344| |leaderboard:mmlu:machine_learning:5 | 0|acc |0.5357|± |0.0473| |leaderboard:mmlu:management:5 | 0|acc |0.8447|± |0.0359| |leaderboard:mmlu:marketing:5 | 0|acc |0.8974|± |0.0199| |leaderboard:mmlu:medical_genetics:5 | 0|acc |0.7400|± |0.0441| |leaderboard:mmlu:miscellaneous:5 | 0|acc |0.8174|± |0.0138| |leaderboard:mmlu:moral_disputes:5 | 0|acc |0.7197|± |0.0242| 
|leaderboard:mmlu:moral_scenarios:5 | 0|acc |0.4168|± |0.0165| |leaderboard:mmlu:nutrition:5 | 0|acc |0.7516|± |0.0247| |leaderboard:mmlu:philosophy:5 | 0|acc |0.7170|± |0.0256| |leaderboard:mmlu:prehistory:5 | 0|acc |0.7222|± |0.0249| |leaderboard:mmlu:professional_accounting:5 | 0|acc |0.5213|± |0.0298| |leaderboard:mmlu:professional_law:5 | 0|acc |0.4759|± |0.0128| |leaderboard:mmlu:professional_medicine:5 | 0|acc |0.7537|± |0.0262| |leaderboard:mmlu:professional_psychology:5 | 0|acc |0.6993|± |0.0186| |leaderboard:mmlu:public_relations:5 | 0|acc |0.7182|± |0.0431| |leaderboard:mmlu:security_studies:5 | 0|acc |0.7673|± |0.0270| |leaderboard:mmlu:sociology:5 | 0|acc |0.8259|± |0.0268| |leaderboard:mmlu:us_foreign_policy:5 | 0|acc |0.8300|± |0.0378| |leaderboard:mmlu:virology:5 | 0|acc |0.5181|± |0.0389| |leaderboard:mmlu:world_religions:5 | 0|acc |0.8129|± |0.0299| |leaderboard:truthfulqa:mc:0 | 0|truthfulqa_mc1|0.3403|± |0.0166| | | |truthfulqa_mc2|0.5085|± |0.0153| |leaderboard:winogrande:5 | 0|acc |0.7309|± |0.0125| ``` Baseline: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------------|------:|--------------|-----:|---|-----:| |all | |acc |0.6635|± |0.0322| | | |acc_norm |0.6569|± |0.0094| | | |truthfulqa_mc1|0.3745|± |0.0169| | | |truthfulqa_mc2|0.5338|± |0.0160| | | |qem |0.6808|± |0.0128| |leaderboard:arc:challenge:25 | 0|acc |0.5742|± |0.0144| | | |acc_norm |0.5828|± |0.0144| |leaderboard:gsm8k:5 | 0|qem |0.6808|± |0.0128| |leaderboard:hellaswag:10 | 0|acc |0.5707|± |0.0049| | | |acc_norm |0.7310|± |0.0044| |leaderboard:mmlu:_average:5 | |acc |0.6662|± |0.0333| |leaderboard:mmlu:abstract_algebra:5 | 0|acc |0.3300|± |0.0473| |leaderboard:mmlu:anatomy:5 | 0|acc |0.6815|± |0.0402| |leaderboard:mmlu:astronomy:5 | 0|acc |0.7500|± |0.0352| |leaderboard:mmlu:business_ethics:5 | 0|acc |0.7000|± |0.0461| |leaderboard:mmlu:clinical_knowledge:5 | 0|acc |0.7472|± |0.0267| |leaderboard:mmlu:college_biology:5 | 0|acc |0.7917|± |0.0340| |leaderboard:mmlu:college_chemistry:5 | 0|acc |0.4500|± |0.0500| |leaderboard:mmlu:college_computer_science:5 | 0|acc |0.5200|± |0.0502| |leaderboard:mmlu:college_mathematics:5 | 0|acc |0.3900|± |0.0490| |leaderboard:mmlu:college_medicine:5 | 0|acc |0.6590|± |0.0361| |leaderboard:mmlu:college_physics:5 | 0|acc |0.4314|± |0.0493| |leaderboard:mmlu:computer_security:5 | 0|acc |0.7900|± |0.0409| |leaderboard:mmlu:conceptual_physics:5 | 0|acc |0.5872|± |0.0322| |leaderboard:mmlu:econometrics:5 | 0|acc |0.5439|± |0.0469| |leaderboard:mmlu:electrical_engineering:5 | 0|acc |0.6138|± |0.0406| |leaderboard:mmlu:elementary_mathematics:5 | 0|acc |0.4683|± |0.0257| |leaderboard:mmlu:formal_logic:5 | 0|acc |0.5317|± |0.0446| |leaderboard:mmlu:global_facts:5 | 0|acc |0.4600|± |0.0501| |leaderboard:mmlu:high_school_biology:5 | 0|acc |0.8065|± |0.0225| |leaderboard:mmlu:high_school_chemistry:5 | 0|acc |0.5419|± |0.0351| |leaderboard:mmlu:high_school_computer_science:5 | 0|acc |0.6800|± |0.0469| |leaderboard:mmlu:high_school_european_history:5 | 0|acc |0.7394|± |0.0343| |leaderboard:mmlu:high_school_geography:5 | 0|acc |0.8131|± |0.0278| |leaderboard:mmlu:high_school_government_and_politics:5| 0|acc |0.8964|± |0.0220| |leaderboard:mmlu:high_school_macroeconomics:5 | 0|acc |0.6769|± |0.0237| |leaderboard:mmlu:high_school_mathematics:5 | 0|acc |0.3259|± |0.0286| |leaderboard:mmlu:high_school_microeconomics:5 | 0|acc |0.7563|± |0.0279| |leaderboard:mmlu:high_school_physics:5 | 0|acc |0.4106|± |0.0402| 
|leaderboard:mmlu:high_school_psychology:5 | 0|acc |0.8477|± |0.0154| |leaderboard:mmlu:high_school_statistics:5 | 0|acc |0.4769|± |0.0341| |leaderboard:mmlu:high_school_us_history:5 | 0|acc |0.7892|± |0.0286| |leaderboard:mmlu:high_school_world_history:5 | 0|acc |0.8397|± |0.0239| |leaderboard:mmlu:human_aging:5 | 0|acc |0.7265|± |0.0299| |leaderboard:mmlu:human_sexuality:5 | 0|acc |0.7939|± |0.0355| |leaderboard:mmlu:international_law:5 | 0|acc |0.7686|± |0.0385| |leaderboard:mmlu:jurisprudence:5 | 0|acc |0.7593|± |0.0413| |leaderboard:mmlu:logical_fallacies:5 | 0|acc |0.7607|± |0.0335| |leaderboard:mmlu:machine_learning:5 | 0|acc |0.5268|± |0.0474| |leaderboard:mmlu:management:5 | 0|acc |0.8155|± |0.0384| |leaderboard:mmlu:marketing:5 | 0|acc |0.9060|± |0.0191| |leaderboard:mmlu:medical_genetics:5 | 0|acc |0.7900|± |0.0409| |leaderboard:mmlu:miscellaneous:5 | 0|acc |0.8238|± |0.0136| |leaderboard:mmlu:moral_disputes:5 | 0|acc |0.7399|± |0.0236| |leaderboard:mmlu:moral_scenarios:5 | 0|acc |0.4358|± |0.0166| |leaderboard:mmlu:nutrition:5 | 0|acc |0.7549|± |0.0246| |leaderboard:mmlu:philosophy:5 | 0|acc |0.7331|± |0.0251| |leaderboard:mmlu:prehistory:5 | 0|acc |0.7469|± |0.0242| |leaderboard:mmlu:professional_accounting:5 | 0|acc |0.5177|± |0.0298| |leaderboard:mmlu:professional_law:5 | 0|acc |0.4648|± |0.0127| |leaderboard:mmlu:professional_medicine:5 | 0|acc |0.7279|± |0.0270| |leaderboard:mmlu:professional_psychology:5 | 0|acc |0.6928|± |0.0187| |leaderboard:mmlu:public_relations:5 | 0|acc |0.6636|± |0.0453| |leaderboard:mmlu:security_studies:5 | 0|acc |0.7306|± |0.0284| |leaderboard:mmlu:sociology:5 | 0|acc |0.8557|± |0.0248| |leaderboard:mmlu:us_foreign_policy:5 | 0|acc |0.8600|± |0.0349| |leaderboard:mmlu:virology:5 | 0|acc |0.5361|± |0.0388| |leaderboard:mmlu:world_religions:5 | 0|acc |0.7953|± |0.0309| |leaderboard:truthfulqa:mc:0 | 0|truthfulqa_mc1|0.3745|± |0.0169| | | |truthfulqa_mc2|0.5338|± |0.0160| |leaderboard:winogrande:5 | 0|acc |0.6930|± |0.0130| ```
docketanalyzer/docket-lm-xs
docketanalyzer
2024-05-24T18:32:42Z
109
0
transformers
[ "transformers", "safetensors", "deberta-v2", "fill-mask", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-24T18:19:06Z
--- license: apache-2.0 ---
NPDIXON/Test
NPDIXON
2024-05-24T18:32:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T18:32:08Z
--- license: apache-2.0 ---
datek/gemma-2b-flock-1716574466
datek
2024-05-24T18:16:58Z
152
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T18:14:27Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
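As the get-started section above is a placeholder, here is a minimal sketch following the standard transformers text-generation workflow implied by the repo tags; the prompt and generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard loading for the gemma text-generation model this repo declares
# in its tags; the prompt is illustrative.
tokenizer = AutoTokenizer.from_pretrained("datek/gemma-2b-flock-1716574466")
model = AutoModelForCausalLM.from_pretrained(
    "datek/gemma-2b-flock-1716574466", device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```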
saransh03sharma/mintrec2-llama-2-7b-200-5
saransh03sharma
2024-05-24T18:11:39Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T18:06:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Holarissun/REPROD_dpo_helpfulhelpful_gpt4_subset-1_modelgemma2b_maxsteps10000_bz8_lr1e-05
Holarissun
2024-05-24T18:10:45Z
0
0
peft
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:google/gemma-2b", "base_model:adapter:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-24T18:10:41Z
--- license: gemma library_name: peft tags: - trl - dpo - generated_from_trainer base_model: google/gemma-2b model-index: - name: REPROD_dpo_helpfulhelpful_gpt4_subset-1_modelgemma2b_maxsteps10000_bz8_lr1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # REPROD_dpo_helpfulhelpful_gpt4_subset-1_modelgemma2b_maxsteps10000_bz8_lr1e-05 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 15 - training_steps: 10000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
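For readers reconstructing this run, the hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows; this is a sketch only (the output directory is hypothetical, and the dataset plus the TRL `DPOTrainer` wiring are not specified by the card).

```python
from transformers import TrainingArguments

# The hyperparameters listed in the card, expressed as TrainingArguments;
# the output_dir is hypothetical.
training_args = TrainingArguments(
    output_dir="reprod-dpo-gemma2b",   # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,     # 1 x 8 = total train batch size of 8
    lr_scheduler_type="linear",
    warmup_steps=15,
    max_steps=10_000,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
)
```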
azizkt/HPY_gpt2_v2AIT
azizkt
2024-05-24T18:08:50Z
219
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T17:05:22Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer model-index: - name: HPY_gpt2_v2AIT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # HPY_gpt2_v2AIT This model is a fine-tuned version of [ClassCat/gpt2-base-french](https://huggingface.co/ClassCat/gpt2-base-french) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3600 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 370 | 0.3788 | | 0.4184 | 2.0 | 741 | 0.3658 | | 0.3569 | 3.0 | 1112 | 0.3616 | | 0.3569 | 3.99 | 1480 | 0.3600 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.13.3
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0524-epoch20
magnifi
2024-05-24T18:08:49Z
82
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T18:05:26Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
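A minimal sketch of loading this checkpoint for inference with Unsloth's `FastLanguageModel`, mirroring the 4-bit setup of the base model; the sequence length and generation settings are assumptions.

```python
from unsloth import FastLanguageModel

# Mirror the 4-bit base-model setup this repo was trained from; the
# sequence length is an assumption based on the 4k-context base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="magnifi/phi-3-mini-4k-instruct-attribute-output-4-0524-epoch20",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Hello", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```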
Kalloniatis/Humor-Recognition-Greek-DistilBERT
Kalloniatis
2024-05-24T18:08:46Z
110
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "el", "dataset:kallantis/Greek-Humorous-Dataset", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-22T15:23:24Z
--- language: - el library_name: transformers datasets: - kallantis/Greek-Humorous-Dataset pipeline_tag: text-classification --- # Humor-Recognition-Greek-DistilBERT This model is based on the multilingual DistilBERT model ("distilbert/distilbert-base-multilingual-cased") fine-tuned for Humor Recognition in the Greek language. ## Model Details The model was fine-tuned for 10 epochs on the Greek Humorous Dataset. ## Pre-processing details The text needs to be pre-processed by removing all Greek diacritics and punctuation and converting all letters to lowercase. ## Load Pretrained Model ```python from transformers import DistilBertTokenizer, DistilBertForSequenceClassification tokenizer = DistilBertTokenizer.from_pretrained("kallantis/Humor-Recognition-Greek-DistilBERT") model = DistilBertForSequenceClassification.from_pretrained("kallantis/Humor-Recognition-Greek-DistilBERT", num_labels=2) ```
nttwt1597/test_v2_cancer_v3
nttwt1597
2024-05-24T18:07:40Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-24T10:09:12Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** nttwt1597 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
wwe180/Llama3-15B-ShenNu-v0.1-Q6_K-GGUF
wwe180
2024-05-24T18:06:47Z
1
0
null
[ "gguf", "15B", "Llama3", "llama-cpp", "gguf-my-repo", "base_model:wwe180/Llama3-15B-lingyang-v0.1", "base_model:quantized:wwe180/Llama3-15B-lingyang-v0.1", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-24T18:06:05Z
--- tags: - 15B - Llama3 - llama-cpp - gguf-my-repo base_model: - wwe180/Llama3-15B-lingyang-v0.1 --- # wwe180/Llama3-15B-ShenNu-v0.1-Q6_K-GGUF This model was converted to GGUF format from [`wwe180/Llama3-15B-ShenNu-v0.1`](https://huggingface.co/wwe180/Llama3-15B-ShenNu-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/wwe180/Llama3-15B-ShenNu-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo wwe180/Llama3-15B-ShenNu-v0.1-Q6_K-GGUF --model llama3-15b-shennu-v0.1-q6_k.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo wwe180/Llama3-15B-ShenNu-v0.1-Q6_K-GGUF --model llama3-15b-shennu-v0.1-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ```bash git clone https://github.com/ggerganov/llama.cpp && \ cd llama.cpp && \ make && \ ./main -m llama3-15b-shennu-v0.1-q6_k.gguf -n 128 ```
EleutherAI/Mistral-7B-v0.1-squaring-random-standardized-random-names
EleutherAI
2024-05-24T18:05:56Z
10
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T00:39:56Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2
Zoyd
2024-05-24T18:03:57Z
17
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "exl2", "region:us" ]
question-answering
2024-05-24T14:14:39Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **3.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 The Nanbeige2-16B-Chat is the latest 16B model developed by the Nanbeige Lab, which utilized 4.5T tokens of high-quality training data during the training phase. During the alignment phase, we initially trained our model using 1 million samples through Supervised Fine-Tuning (SFT). We then engaged in curriculum learning with 400,000 high-quality samples that presented a greater level of difficulty. Subsequently, we incorporated human feedback through Direct Preference Optimization (DPO), culminating in the development of Nanbeige2-16B-Chat. Nanbeige2-16B-Chat has achieved superior performance across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda')) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during the training process, striving to ensure that its outputs align with ethical and legal requirements, it may not completely avoid generating unexpected outputs due to the model's size and probabilistic nature. These outputs may include harmful content such as bias or discrimination. Please don't propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige Models or its derivatives for commercial purposes, please submit application materials to meet the requirements of the Nanbeige Models Community License Agreement by contacting [email protected]. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2
Zoyd
2024-05-24T18:03:40Z
13
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "exl2", "region:us" ]
question-answering
2024-05-24T14:15:46Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 Nanbeige2-16B-Chat is the latest 16B-parameter model developed by Nanbeige Lab, pretrained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model on 1 million samples with Supervised Fine-Tuning (SFT), then applied curriculum learning on 400,000 high-quality, higher-difficulty samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), yielding Nanbeige2-16B-Chat. The model achieves strong results across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during training, striving to ensure that its outputs align with ethical and legal requirements, the model cannot completely avoid generating unexpected outputs, given its size and probabilistic generation paradigm. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige models or their derivatives for commercial purposes, please submit application materials to [email protected], as required by the agreement. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2
Zoyd
2024-05-24T18:03:23Z
14
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "4-bit", "exl2", "region:us" ]
question-answering
2024-05-24T14:15:02Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **4.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 Nanbeige2-16B-Chat is the latest 16B-parameter model developed by Nanbeige Lab, pretrained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model on 1 million samples with Supervised Fine-Tuning (SFT), then applied curriculum learning on 400,000 high-quality, higher-difficulty samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), yielding Nanbeige2-16B-Chat. The model achieves strong results across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during training, striving to ensure that its outputs align with ethical and legal requirements, the model cannot completely avoid generating unexpected outputs, given its size and probabilistic generation paradigm. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige models or their derivatives for commercial purposes, please submit application materials to [email protected], as required by the agreement. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2
Zoyd
2024-05-24T18:02:51Z
37
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "8-bit", "exl2", "region:us" ]
question-answering
2024-05-24T14:15:57Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 Nanbeige2-16B-Chat is the latest 16B-parameter model developed by Nanbeige Lab, pretrained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model on 1 million samples with Supervised Fine-Tuning (SFT), then applied curriculum learning on 400,000 high-quality, higher-difficulty samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), yielding Nanbeige2-16B-Chat. The model achieves strong results across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during training, striving to ensure that its outputs align with ethical and legal requirements, the model cannot completely avoid generating unexpected outputs, given its size and probabilistic generation paradigm. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige models or their derivatives for commercial purposes, please submit application materials to [email protected], as required by the agreement. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2
Zoyd
2024-05-24T18:02:33Z
36
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "6-bit", "exl2", "region:us" ]
question-answering
2024-05-24T14:15:35Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **6.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 Nanbeige2-16B-Chat is the latest 16B-parameter model developed by Nanbeige Lab, pretrained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model on 1 million samples with Supervised Fine-Tuning (SFT), then applied curriculum learning on 400,000 high-quality, higher-difficulty samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), yielding Nanbeige2-16B-Chat. The model achieves strong results across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during training, striving to ensure that its outputs align with ethical and legal requirements, the model cannot completely avoid generating unexpected outputs, given its size and probabilistic generation paradigm. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige models or their derivatives for commercial purposes, please submit application materials to [email protected], as required by the agreement. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2
Zoyd
2024-05-24T18:01:50Z
32
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "3-bit", "exl2", "region:us" ]
question-answering
2024-05-24T14:14:29Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **3.0 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 Nanbeige2-16B-Chat is the latest 16B-parameter model developed by Nanbeige Lab, pretrained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model on 1 million samples with Supervised Fine-Tuning (SFT), then applied curriculum learning on 400,000 high-quality, higher-difficulty samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), yielding Nanbeige2-16B-Chat. The model achieves strong results across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during training, striving to ensure that its outputs align with ethical and legal requirements, the model cannot completely avoid generating unexpected outputs, given its size and probabilistic generation paradigm. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige models or their derivatives for commercial purposes, please submit application materials to [email protected], as required by the agreement. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2
Zoyd
2024-05-24T18:01:38Z
14
0
transformers
[ "transformers", "safetensors", "nanbeige", "text-generation", "llm", "question-answering", "custom_code", "en", "zh", "license:apache-2.0", "autotrain_compatible", "exl2", "region:us" ]
question-answering
2024-05-24T14:14:07Z
--- license: apache-2.0 language: - en - zh metrics: - accuracy library_name: transformers pipeline_tag: question-answering tags: - llm - nanbeige --- **Exllamav2** quant (**exl2** / **2.2 bpw**) made with ExLlamaV2 v0.0.21 Other EXL2 quants: | **Quant** | **Model Size** | **lm_head** | | ----- | ---------- | ------- | |<center>**[2.2](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_2bpw_exl2)**</center> | <center>4803 MB</center> | <center>6</center> | |<center>**[2.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-2_5bpw_exl2)**</center> | <center>5344 MB</center> | <center>6</center> | |<center>**[3.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_0bpw_exl2)**</center> | <center>6253 MB</center> | <center>6</center> | |<center>**[3.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_5bpw_exl2)**</center> | <center>7161 MB</center> | <center>6</center> | |<center>**[3.75](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-3_75bpw_exl2)**</center> | <center>7616 MB</center> | <center>6</center> | |<center>**[4.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_0bpw_exl2)**</center> | <center>8068 MB</center> | <center>6</center> | |<center>**[4.25](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-4_25bpw_exl2)**</center> | <center>8522 MB</center> | <center>6</center> | |<center>**[5.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-5_0bpw_exl2)**</center> | <center>9883 MB</center> | <center>6</center> | |<center>**[6.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_0bpw_exl2)**</center> | <center>11726 MB</center> | <center>8</center> | |<center>**[6.5](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-6_5bpw_exl2)**</center> | <center>12649 MB</center> | <center>8</center> | |<center>**[8.0](https://huggingface.co/Zoyd/Nanbeige_Nanbeige2-16B-Chat-8_0bpw_exl2)**</center> | <center>14397 MB</center> | <center>8</center> | <div align="center"> <h1> Nanbeige2-16B-Chat </h1> </div> ## <span id="Introduction">模型介绍(Introduction)</span> Nanbeige2-16B-Chat是南北阁实验室最新研发的160亿参数模型,在预训练中使用4.5T Tokens高质量语料。 在对齐阶段,我们首先使用了100万条样本进行SFT训练,然后用40万高质量且难度较高的样本进行课程学习,再通过人类反馈DPO,得到Nanbeige2-16B-Chat。Nanbeige2-16B-Chat在各个权威测评数据集上都取得了较优的效果。 Nanbeige2-16B-Chat is the latest 16B-parameter model developed by Nanbeige Lab, pretrained on 4.5T tokens of high-quality data. During the alignment phase, we first trained the model on 1 million samples with Supervised Fine-Tuning (SFT), then applied curriculum learning on 400,000 high-quality, higher-difficulty samples, and finally incorporated human feedback through Direct Preference Optimization (DPO), yielding Nanbeige2-16B-Chat. The model achieves strong results across various authoritative benchmark datasets.
## <span id="Inference">模型推理(Inference)</span> ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', use_fast=False, trust_remote_code=True ) model = AutoModelForCausalLM.from_pretrained( 'Nanbeige/Nanbeige2-16B-Chat', torch_dtype='auto', device_map='auto', trust_remote_code=True ) messages = [ {'role': 'user', 'content': 'Hello'} ] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True, tokenize=False ) input_ids = tokenizer(prompt, add_special_tokens=False, return_tensors='pt').input_ids output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=512) resp = tokenizer.decode(output_ids[0][len(input_ids[0]):], skip_special_tokens=True) print(resp) ``` ## <span id="Limitations">局限性(Limitations)</span> 虽然我们在训练过程中非常注重模型的安全性,力求确保其输出符合伦理和法律要求的文本,但由于模型大小和概率生成范式的限制,无法完全避免产生各种不符合预期的输出情况。这些输出可能包含偏见、歧视等有害内容,请勿传播这些内容。我们不承担因传播不良信息而导致的任何后果。 While we place great emphasis on the safety of the model during training, striving to ensure that its outputs align with ethical and legal requirements, the model cannot completely avoid generating unexpected outputs, given its size and probabilistic generation paradigm. These outputs may include harmful content such as bias or discrimination. Please do not propagate such content. We do not assume any responsibility for the consequences resulting from the dissemination of inappropriate information. ## <span id="License">协议(License)</span> 使用 Nanbeige 模型时,您必须遵守 Apache 2.0 许可证和[《南北阁大语言模型许可协议》](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/南北阁大语言模型许可协议.pdf)。如果您打算将 Nanbeige 模型或其衍生产品用于商业目的,请通过以下联系邮箱 [email protected] 提交申请材料,以满足《南北阁大语言模型许可协议》的要求。经过审核后,我们将授予您非排他性、全球范围内、不可转让、不可再许可、可撤销的商业版权许可。 When using the Nanbeige models, you must comply with the Apache 2.0 License and the [License Agreement for Large Language Models Nanbeige](https://huggingface.co/Nanbeige/Nanbeige-16B-Base-32k/resolve/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf). If you intend to use the Nanbeige models or their derivatives for commercial purposes, please submit application materials to [email protected], as required by the agreement. After review, we will grant you a non-exclusive, worldwide, non-transferable, non-sublicensable and revocable commercial copyright license.
allganize/Llama-3-Alpha-Ko-8B-Instruct
allganize
2024-05-24T17:59:18Z
2,305
44
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "arxiv:2403.07691", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T08:08:23Z
--- license: other license_name: llama3 language: - ko --- ![Alpha-Instruct](./alpha-instruct.png) We are thrilled to introduce **Alpha-Instruct**, our latest language model, which demonstrates exceptional capabilities in both Korean and English. Alpha-Instruct is developed using the **Evolutionary Model Merging** technique, enabling it to excel in complex language tasks and logical reasoning. A key aspect of Alpha-Instruct's development is our **community-based approach**. We draw inspiration and ideas from various communities, shaping our datasets, methodologies, and the model itself. In return, we are committed to sharing our insights with the community, providing detailed information on the data, methods, and models used in Alpha-Instruct's creation. Alpha-Instruct has achieved outstanding performance on the **LogicKor** benchmark, **scoring an impressive 6.62**. Remarkably, this performance rivals that of 70B models, showcasing the efficiency and power of our 8B model. This achievement highlights Alpha-Instruct's advanced computational and reasoning skills, making it a leading choice for diverse and demanding language tasks. **For more information and technical details about Alpha-Instruct, stay tuned to our updates and visit our [website](https://allganize-alpha.github.io/) (Soon).** --- ## Overview Alpha-Instruct is our latest language model, developed using the 'Evolutionary Model Merging' technique. This method employs a 1:1 ratio of task-specific datasets from KoBEST and Haerae, resulting in a model named 'Alpha-Ko-8B-Evo'. The following models were used for merging: - [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) (Base) - [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) (Instruct) - [Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) (Continual Pretrained) To refine and enhance Alpha-Instruct, we utilized carefully curated, high-quality datasets aimed at 'healing' the model's output, significantly boosting its human preference scores. We use [ORPO](https://arxiv.org/abs/2403.07691) specifically for this "healing" (preference-alignment) phase. The datasets* used include: - [Korean-Human-Judgements](https://huggingface.co/datasets/HAERAE-HUB/Korean-Human-Judgements) - [Orca-Math](https://huggingface.co/datasets/kuotient/orca-math-word-problems-193k-korean) - [dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) *Some of these datasets were partially used and translated for training, and we ensured there was no contamination during the evaluation process. This approach effectively balances human preferences with the model's capabilities, making Alpha-Instruct well suited for real-life scenarios where user satisfaction and performance are equally important. ## Benchmark Results Results on [LogicKor](https://github.com/StableFluffy/LogicKor)* are as follows: | Model | Single turn* | Multi turn* | Overall* | |:------------------------------:|:------------:|:-----------:|:--------:| | MLP-KTLim/llama-3-Korean-Bllossom-8B | 4.238 | 3.404 | 3.821 | | Alpha-Ko-Evo | 5.143 | 5.238 | 5.190 | | Alpha-Ko-Instruct (alt) | 7.095 | **6.571** | **6.833** | | Alpha-Ko-Instruct | **7.143** | 6.065 | 6.620 | | Alpha-Ko-Instruct-marlin (4bit) | 6.857 | 5.738 | 6.298 | *Self-reported (default settings with 'alpha' template, mean of 3). 
Results on KoBEST (acc, num_shot=5) are as follows: | Task | beomi/Llama-3-Open-Ko-8B-Instruct | maywell/Llama-3-Ko-8B-Instruct | **Alpha-Ko-Evo** | **Alpha-Ko-Instruct** | | --- | --- | --- | --- | --- | | kobest overall | 0.6220 | 0.6852 | 0.7229 | 0.7055 | | kobest_boolq | 0.6254 | 0.7208 | 0.8547 | 0.8369 | | kobest_copa | 0.7110 | 0.7650 | 0.7420 | 0.7420 | | kobest_hellaswag | 0.3840 | 0.4440 | 0.4220 | 0.4240 | | kobest_sentineg | 0.8388 | 0.9194 | 0.9471 | 0.9244 | | kobest_wic | 0.5738 | 0.6040 | 0.6095 | 0.5730 | *For reference, 'merged' models are chosen. ## How to use ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "allganize/Llama-3-Alpha-Ko-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "system", "content": "당신은 인공지능 어시스턴트입니다. 묻는 말에 친절하고 정확하게 답변하세요."}, {"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=512, eos_token_id=terminators, do_sample=False, repetition_penalty=1.05, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Correspondence to - Ji soo Kim ([email protected]) - Contributors - Sangmin Jeon ([email protected]) - Seungwoo Ryu ([email protected]) ## Special Thanks - [@beomi](https://huggingface.co/beomi) for providing us with a great model! ## License The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). ## Citation If you use this model in your research, please cite it as follows: ```bibtex @misc{alpha-instruct, author = {Ji soo Kim}, title = {Alpha-Instruct: Allganize Bilingual Model}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, url = {https://huggingface.co/allganize/Llama-3-Alpha-Ko-8B-Instruct}, } ```
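The card credits ORPO for the "healing" phase. Below is a minimal sketch of how such a phase could be set up with TRL's `ORPOTrainer`; the hyperparameters, output path, and dataset handling are illustrative assumptions, not the authors' actual recipe.

```python
# Sketch: an ORPO preference-alignment phase with TRL (illustrative, not the authors' setup).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "allganize/Llama-3-Alpha-Ko-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# One of the preference datasets named above; assumes prompt/chosen/rejected columns.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

args = ORPOConfig(
    output_dir="alpha-orpo",       # assumed output path
    beta=0.1,                      # weight of the odds-ratio preference term
    per_device_train_batch_size=2,
    learning_rate=8e-6,
)
ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer).train()
```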
RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf
RichardErkhov
2024-05-24T17:49:51Z
12
1
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-05-24T13:53:51Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B - GGUF - Model creator: https://huggingface.co/yunconglong/ - Original model: https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q2_K.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q2_K.gguf) | Q2_K | 4.43GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ3_XS.gguf) | IQ3_XS | 4.94GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ3_S.gguf) | IQ3_S | 5.22GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K_S.gguf) | Q3_K_S | 5.2GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ3_M.gguf) | IQ3_M | 5.34GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K.gguf) | Q3_K | 5.78GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K_M.gguf) | Q3_K_M | 5.78GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q3_K_L.gguf) | Q3_K_L | 6.27GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ4_XS.gguf) | IQ4_XS | 6.5GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_0.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_0.gguf) | Q4_0 | 6.78GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.IQ4_NL.gguf) | IQ4_NL | 6.85GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K_S.gguf) | Q4_K_S | 6.84GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K.gguf) | Q4_K | 7.25GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K_M.gguf) | Q4_K_M | 7.25GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_1.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_1.gguf) | Q4_1 | 7.52GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_0.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_0.gguf) | Q5_0 | 8.26GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_K_S.gguf) | Q5_K_S | 8.26GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_K.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_K.gguf) | Q5_K | 8.51GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_K_M.gguf) | Q5_K_M | 8.51GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_1.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q5_1.gguf) | Q5_1 | 9.01GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q6_K.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q6_K.gguf) | Q6_K | 9.84GB | | [Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q8_0.gguf](https://huggingface.co/RichardErkhov/yunconglong_-_Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B-gguf/blob/main/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q8_0.gguf) | Q8_0 | 12.75GB | Original model description: --- license: mit tags: - moe - DPO - RL-TUNED --- * [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) with dataset jondurbin/truthy-dpo-v0.1 to improve [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B) ``` DPO Trainer TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023. ```
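GGUF files like those listed above are typically run with llama.cpp or its Python bindings. A minimal sketch with `llama-cpp-python` follows; the chosen quant file, context size, and GPU offload setting are illustrative assumptions.

```python
# Sketch: run one of the GGUF quants above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,       # illustrative context window
    n_gpu_layers=-1,  # offload all layers if built with GPU support
)

out = llm("Q: What does DPO optimize? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```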
John6666/ebara-pony-v1-sdxl
John6666
2024-05-24T17:48:00Z
521
4
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-24T14:52:40Z
--- license: other tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- The original model is [here](https://huggingface.co/tsukihara/xl_model).
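Per the record's `diffusers`/`StableDiffusionXLPipeline` tags, a minimal loading sketch follows; the prompt and step count are illustrative assumptions.

```python
# Sketch: load this SDXL checkpoint with diffusers (settings are illustrative).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ebara-pony-v1-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe("1girl, outdoors, best quality", num_inference_steps=28).images[0]
image.save("sample.png")
```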
mjobe105/qlora-dolphindataset
mjobe105
2024-05-24T17:46:56Z
1
0
peft
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-70B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-70B-Instruct", "license:llama3", "4-bit", "bitsandbytes", "region:us" ]
null
2024-05-24T17:32:51Z
--- license: llama3 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Meta-Llama-3-70B-Instruct model-index: - name: workspace/data/out/qlora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-70B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: true strict: false datasets: - path: /workspace/data/dataset/hex_phi_dolphin_responses.jsonl ds_type: json type: alpaca dataset_prepared_path: last_run_prepared val_set_size: 0.05 output_dir: /workspace/data/out/qlora adapter: qlora lora_model_dir: sequence_len: 512 sample_packing: false eval_sample_packing: false pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 4 micro_batch_size: 2 num_epochs: 4 optimizer: adamw_torch lr_scheduler: cosine learning_rate: 0.00001 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # workspace/data/out/qlora This model is a QLoRA fine-tune of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) on the local `hex_phi_dolphin_responses.jsonl` dataset (see the axolotl config above). It achieves the following results on the evaluation set: - Loss: 2.0876 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 10 - gradient_accumulation_steps: 4 - total_train_batch_size: 80 - total_eval_batch_size: 20 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.7723 | 0.2667 | 1 | 2.0884 | | 1.8176 | 0.5333 | 2 | 2.0872 | | 1.8499 | 0.8 | 3 | 2.0874 | | 1.7963 | 1.0667 | 4 | 2.0865 | | 1.8762 | 1.3333 | 5 | 2.0866 | | 1.7795 | 1.6 | 6 | 2.0875 | | 1.8179 | 1.8667 | 7 | 2.0880 | | 1.8353 | 2.1333 | 8 | 2.0874 | | 1.8009 | 2.4 | 9 | 2.0864 | | 1.7625 | 2.6667 | 10 | 2.0869 | | 1.8273 | 2.9333 | 11 | 2.0874 | | 1.8198 | 3.2 | 12 | 2.0876 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.1.2+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
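A minimal sketch of loading this PEFT adapter on top of its base model for inference; the 4-bit settings mirror the QLoRA config above, and access to the gated Llama-3 weights is assumed.

```python
# Sketch: attach the QLoRA adapter to the 4-bit base model (assumes gated-model access).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Meta-Llama-3-70B-Instruct"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, "mjobe105/qlora-dolphindataset")  # this adapter repo
model.eval()
```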
John6666/anima-pencil-sdxl
John6666
2024-05-24T17:46:52Z
28
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-05-24T13:46:44Z
--- license: other tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- The original model is [here](https://huggingface.co/bluepen5805/anima_pencil-XL).
hgnoi/h78SXnOrTyEq8ZJI
hgnoi
2024-05-24T17:46:45Z
136
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T17:45:13Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Joreyolo/stablity_BioMistral_7B_task_arithmetic_0523
Joreyolo
2024-05-24T17:42:43Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T17:31:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CMU-AIR2/math-llama-3-LORA-Arithmetic-6k
CMU-AIR2
2024-05-24T17:40:44Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-24T14:19:19Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
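A minimal loading sketch, grounded in the metadata above (a PEFT adapter for meta-llama/Meta-Llama-3-8B); the dtype, device placement, and prompt are illustrative assumptions, and the same pattern applies to the -4k and -2k variants below.

```python
# Minimal sketch: attach the LoRA adapter to its stated base model.
# Assumes access to the gated meta-llama/Meta-Llama-3-8B weights.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "CMU-AIR2/math-llama-3-LORA-Arithmetic-6k"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads the LoRA weights on top of the base

inputs = tokenizer("What is 127 + 395?", return_tensors="pt").to(model.device)  # illustrative prompt
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```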
CMU-AIR2/math-llama-3-LORA-Arithmetic-4k
CMU-AIR2
2024-05-24T17:40:40Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-24T14:16:22Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
CMU-AIR2/math-llama-3-LORA-Arithmetic-2k
CMU-AIR2
2024-05-24T17:40:35Z
0
0
peft
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "base_model:adapter:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-05-24T14:10:54Z
--- library_name: peft base_model: meta-llama/Meta-Llama-3-8B --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
mogoi/test_01
mogoi
2024-05-24T17:39:10Z
151
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T11:27:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
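A minimal usage sketch based on the tags above (a GPT-2 text-generation checkpoint); the prompt and sampling settings are illustrative assumptions.

```python
# Minimal sketch: generate text with the generic text-generation pipeline,
# which works for any GPT-2-style causal LM pushed to the Hub.
from transformers import pipeline

generator = pipeline("text-generation", model="mogoi/test_01")
print(generator("Hello, world", max_new_tokens=30, do_sample=True)[0]["generated_text"])
```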
nvidia/dragon-multiturn-context-encoder
nvidia
2024-05-24T17:38:53Z
807,134
24
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "en", "arxiv:2401.10225", "arxiv:2302.07452", "license:other", "endpoints_compatible", "region:us" ]
feature-extraction
2024-04-30T21:21:45Z
---
language:
- en
tags:
- dragon
- retriever
- conversation
- multi-turn
- conversational query
license: other
---

## Model Description

We introduce Dragon-multiturn, a retriever specifically designed for the conversational QA scenario. It can handle conversational queries that combine dialogue history with the current query. It is built on top of the [Dragon](https://huggingface.co/facebook/dragon-plus-query-encoder) retriever. The details of Dragon-multiturn can be found [here](https://arxiv.org/pdf/2401.10225). **Please note that Dragon-multiturn is a dual encoder consisting of a query encoder and a context encoder. This repository is only for the context encoder of Dragon-multiturn, which produces the context embeddings; you also need the query encoder to get query embeddings, which can be found [here](https://huggingface.co/nvidia/dragon-multiturn-query-encoder). Both encoders share the same tokenizer.**

## Other Resources

[Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) &ensp; [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B) &ensp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &ensp; [Website](https://chatqa-project.github.io/) &ensp; [Paper](https://arxiv.org/pdf/2401.10225)

## Benchmark Results

<style type="text/css">
.tg {border:none;border-collapse:collapse;border-spacing:0;}
.tg td{border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;overflow:hidden; padding:10px 5px;word-break:normal;}
.tg th{border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;font-weight:normal; overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:center}
.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:center}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky" rowspan="2"></th>
<th class="tg-c3ow" colspan="2">Average</th>
<th class="tg-c3ow" colspan="2">Doc2Dial</th>
<th class="tg-c3ow" colspan="2">QuAC</th>
<th class="tg-c3ow" colspan="2">QReCC</th>
<th class="tg-c3ow" colspan="2">TopiOCQA</th>
<th class="tg-c3ow" colspan="2">INSCIT</th>
</tr>
<tr>
<th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th>
<th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th>
<th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th>
<th class="tg-c3ow">top-1</th> <th class="tg-c3ow">top-5</th>
<th class="tg-c3ow">top-5*</th> <th class="tg-c3ow">top-20*</th>
<th class="tg-c3ow">top-5*</th> <th class="tg-c3ow">top-20*</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Dragon</td>
<td class="tg-c3ow">46.3</td> <td class="tg-c3ow">73.1</td>
<td class="tg-c3ow">43.3</td> <td class="tg-c3ow">75.6</td>
<td class="tg-c3ow">56.8</td> <td class="tg-c3ow">82.9</td>
<td class="tg-c3ow">46.2</td> <td class="tg-c3ow">82.0</td>
<td class="tg-c3ow">57.7</td> <td class="tg-c3ow">78.8</td>
<td class="tg-c3ow">27.5</td> <td class="tg-c3ow">46.2</td>
</tr>
<tr>
<td class="tg-0pky">Dragon-multiturn</td>
<td class="tg-c3ow">53.0</td> <td class="tg-c3ow">81.2</td>
<td class="tg-c3ow">48.6</td> <td class="tg-c3ow">83.5</td>
<td class="tg-c3ow">54.8</td> <td class="tg-c3ow">83.2</td>
<td class="tg-c3ow">49.6</td> <td class="tg-c3ow">86.7</td>
<td class="tg-c3ow">64.5</td> <td class="tg-c3ow">85.2</td>
<td class="tg-c3ow">47.4</td> <td class="tg-c3ow">67.1</td>
</tr>
</tbody>
</table>

Retrieval
results across five multi-turn QA datasets (Doc2Dial, QuAC, QReCC, TopiOCQA, INSCIT) with the average top-1 and top-5 recall scores. *Since the average context length in TopiOCQA and INSCIT is smaller than in other datasets, we report top-5 and top-20 to roughly match the context lengths of top-1 and top-5, respectively, in those datasets.

## How to use

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')

query = [
    {"role": "user", "content": "I need help planning my Social Security benefits for my survivors."},
    {"role": "agent", "content": "Are you currently planning for your future?"},
    {"role": "user", "content": "Yes, I am."}
]
contexts = [
    "Benefits Planner: Survivors | Planning For Your Survivors \nAs you plan for the future , you'll want to think about what your family would need if you should die now. Social Security can help your family if you have earned enough Social Security credits through your work. You can earn up to four credits each year. In 2019 , for example , you earn one credit for each $1,360 of wages or self - employment income. When you have earned $5,440 , you have earned your four credits for the year. The number of credits needed to provide benefits for your survivors depends on your age when you die. No one needs more than 40 credits 10 years of work to be eligible for any Social Security benefit. But , the younger a person is , the fewer credits they must have for family members to receive survivors benefits. Benefits can be paid to your children and your spouse who is caring for the children even if you don't have the required number of credits. They can get benefits if you have credit for one and one - half years of work 6 credits in the three years just before your death. For Your Widow Or Widower \nThere are about five million widows and widowers receiving monthly Social Security benefits based on their deceased spouse's earnings record.",
    "Benefits Planner: Retirement \nOther Things to Consider \nWhat Is The Best Age To Start Your Benefits? The answer is that there is no one \" best age \" for everyone and, ultimately, it is your choice. You should make an informed decision about when to apply for benefits based on your individual and family circumstances. Your monthly benefit amount can differ substantially based on the age when you start receiving benefits. If you decide to start benefits : before your full retirement age , your benefit will be smaller but you will receive it for a longer period of time. at your full retirement age or later , you will receive a larger monthly benefit for a shorter period of time. The amount you receive when you first get benefits sets the base for the amount you will receive for the rest of your life. You may want to consider the following when you make that decision : If you plan to continue working , there are limits on how much you can earn each year between age 62 and full retirement age and still get all your benefits. Depending on the amount of your benefit and your earnings for the year , you may have to give up some of your benefits."
]

## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query = '\n'.join([turn['role'] + ": " + turn['content'] for turn in query]).strip()

## get query and context embeddings
query_input = tokenizer(formatted_query, return_tensors='pt')
ctx_input = tokenizer(contexts, padding=True, truncation=True, max_length=512, return_tensors='pt')
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]  # (1, emb_dim)
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]  # (num_ctx, emb_dim)

## Compute similarity scores using dot product
similarities = query_emb.matmul(ctx_emb.transpose(0, 1))  # (1, num_ctx)

## rank the similarity (from highest to lowest)
ranked_results = torch.argsort(similarities, dim=-1, descending=True)  # (1, num_ctx)
```

## Evaluations on Multi-Turn QA Retrieval Benchmark **(UPDATE!!)**

We evaluate multi-turn QA retrieval on five datasets: Doc2Dial, QuAC, QReCC, TopiOCQA, and INSCIT, which can be found in the [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench). The evaluation scripts can be found [here](https://huggingface.co/nvidia/dragon-multiturn-query-encoder/tree/main/evaluation).

## License

Dragon-multiturn is built on top of [Dragon](https://arxiv.org/abs/2302.07452). We refer users to the original license of the Dragon model. Dragon-multiturn is also subject to the [Terms of Use](https://openai.com/policies/terms-of-use).

## Correspondence to

Zihan Liu ([email protected]), Wei Ping ([email protected])

## Citation

<pre>
@article{liu2024chatqa,
  title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG},
  author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:2401.10225},
  year={2024}}
</pre>
Victor26/QRIOA011_predictions
Victor26
2024-05-24T17:37:30Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T17:37:30Z
--- license: apache-2.0 ---
jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese_speech_to_IPA
jayashreedevi2020
2024-05-24T17:35:15Z
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:facebook/wav2vec2-xls-r-300m", "base_model:finetune:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-23T18:15:53Z
--- license: apache-2.0 base_model: facebook/wav2vec2-xls-r-300m tags: - generated_from_trainer datasets: - common_voice_11_0 metrics: - wer model-index: - name: wav2vec2-large-xls-r-300m-assamese_speech_to_IPA results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: common_voice_11_0 type: common_voice_11_0 config: as split: test args: as metrics: - name: Wer type: wer value: 0.5974643423137876 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-assamese_speech_to_IPA This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.0543 - Wer: 0.5975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 40 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:------:| | 4.4763 | 9.8765 | 400 | 1.0898 | 0.8007 | | 0.3692 | 19.7531 | 800 | 0.9617 | 0.6628 | | 0.1187 | 29.6296 | 1200 | 1.0302 | 0.5990 | | 0.0659 | 39.5062 | 1600 | 1.0543 | 0.5975 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
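A minimal inference sketch, assuming 16 kHz mono audio (the sampling rate expected by wav2vec2-xls-r-300m); "audio.wav" is a placeholder path.

```python
# Minimal sketch: transcribe Assamese speech to IPA with the ASR pipeline.
# "audio.wav" is a placeholder; 16 kHz mono input is assumed.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jayashreedevi2020/wav2vec2-large-xls-r-300m-assamese_speech_to_IPA",
)
print(asr("audio.wav")["text"])
```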
Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_vit_lr
Omriy123
2024-05-24T17:34:21Z
222
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-24T17:20:05Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_vit_lr results: - task: name: Image Classification type: image-classification dataset: name: Dogs_vs_Cats type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.5736 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_vit_lr This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.6761 - Accuracy: 0.5736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6952 | 1.0 | 469 | 0.6928 | 0.4971 | | 0.6736 | 2.0 | 938 | 0.6844 | 0.5552 | | 0.66 | 3.0 | 1407 | 0.6900 | 0.5523 | | 0.6937 | 4.0 | 1876 | 0.6770 | 0.5789 | | 0.6723 | 5.0 | 2345 | 0.6761 | 0.5736 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.1
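A minimal inference sketch; "cat.jpg" is a placeholder path, and the labels come from the Dogs_vs_Cats fine-tune described above.

```python
# Minimal sketch: classify an image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Omriy123/vit_epochs5_batch32_lr5e-05_size224_tiles1_seed1_vit_lr",
)
print(classifier("cat.jpg"))  # list of {"label", "score"} dicts
```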
idoco/MenakBERT
idoco
2024-05-24T17:34:07Z
0
0
null
[ "Transformers", "PyTorch", "token-classification", "he", "arxiv:2105.05209", "region:us" ]
token-classification
2024-05-20T07:00:02Z
---
language:
- he
pipeline_tag: token-classification
tags:
- Transformers
- PyTorch
---

<!-- Provide a quick summary of what the model is/does. -->

## MenakBERT

A Hebrew BERT-style masked language model operating over characters, pre-trained by masking spans of characters, similarly to SpanBERT (Joshi et al., 2020). A Hebrew diacritizer based on a BERT-style char-level backbone, it predicts diacritical marks in a seq2seq fashion.

### Model Description

This model takes tau/tavbert-he and adds a three-headed classification head that outputs three sequences corresponding to three types of Hebrew niqqud (diacritics). It was fine-tuned on the dataset generously provided by Elazar Gershuni of Nakdimon.

- **Developed by:** Jacob Gidron, Ido Cohen and Idan Pinto
- **Model type:** Bert
- **Language:** Hebrew
- **Finetuned from model:** tau/tavbert-he

<!-- ### Model Sources [optional] -->
<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/jacobgidron/MenakBert

<!-- - **Paper [optional]:** [More Information Needed] -->
<!-- - **Demo [optional]:** [More Information Needed] -->

## Use

The model expects undotted Hebrew text, which may contain numbers and punctuation. The output is three sequences of diacritical marks, corresponding to:

1. The dot distinguishing the letters Shin vs. Sin.
2. The dot in the center of a letter, which in some cases changes the pronunciation of certain letters, and in other cases creates an effect similar to emphasis on the letter, or gemination.
3. All the rest of the marks, used mostly for vocalization.

The length of each sequence is the same as the input's; each mark corresponds to the character at the same position in the input. The provided script weaves the sequences together (a sketch of this step appears at the end of this card).

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

### Training Data

The backbone tau/tavbert-he was trained on the Hebrew section of OSCAR (Ortiz, 2019) (10 GB of text, 20 million sentences). The fine-tuning was done on the Nakdimon dataset, which can be found at https://github.com/elazarg/hebrew_diacritized and contains 274,436 dotted Hebrew tokens across 413 documents. For more information see https://arxiv.org/abs/2105.05209

<!-- #### Metrics -->
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
<!-- [More Information Needed] -->
<!-- ### Results -->
<!-- [More Information Needed] -->

## Model Card Contact

Ido Cohen - [email protected]
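A minimal sketch of the weaving step described above, assuming each predicted sequence is a per-character list of mark strings with the empty string meaning "no mark" (all names are illustrative, not those of the repository's script).

```python
# Minimal sketch of the weaving step: each of the three predicted
# sequences is character-aligned with the input, so the dotted text is
# produced by emitting every character followed by its predicted marks.
# An empty string stands for "no mark"; names here are illustrative.
def weave(text, shin_sin_marks, dagesh_marks, niqqud_marks):
    assert len(text) == len(shin_sin_marks) == len(dagesh_marks) == len(niqqud_marks)
    pieces = []
    for ch, shin_sin, dagesh, niqqud in zip(text, shin_sin_marks, dagesh_marks, niqqud_marks):
        pieces.append(ch + shin_sin + dagesh + niqqud)  # character, then its marks
    return "".join(pieces)
```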
Zephyrus-H/annoyingball1
Zephyrus-H
2024-05-24T17:32:22Z
1
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-05-24T17:32:17Z
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: annoyingball
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---

# LoRA DreamBooth - annoyingball

These are LoRA adaptation weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "annoyingball" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.

Test prompt: an annoyingball is sitting on a table

![image_0](test_images/image_0.png) ![image_1](test_images/image_1.png) ![image_2](test_images/image_2.png) ![image_3](test_images/image_3.png)
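A minimal loading sketch, assuming the weights are in a diffusers-loadable LoRA format (on older diffusers versions, `pipe.unet.load_attn_procs(...)` may be needed instead); fp16 and CUDA are illustrative choices.

```python
# Minimal sketch: load the stated base model, attach the LoRA weights,
# and sample with the card's test prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Zephyrus-H/annoyingball1")  # attach the DreamBooth LoRA

image = pipe("an annoyingball is sitting on a table").images[0]
image.save("annoyingball.png")
```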
nvidia/Llama3-ChatQA-1.5-70B
nvidia
2024-05-24T17:32:05Z
254
332
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "conversational", "en", "arxiv:2401.10225", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-28T21:44:57Z
---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
---

## Model Details

We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from the [ChatQA paper](https://arxiv.org/pdf/2401.10225), and it is built on top of the [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM); we then converted the checkpoints to Hugging Face format. **For more information about ChatQA, check the [website](https://chatqa-project.github.io/)!**

## Other Resources

[Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) &ensp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &ensp; [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) &ensp; [Website](https://chatqa-project.github.io/) &ensp; [Paper](https://arxiv.org/pdf/2401.10225)

## Benchmark Results

Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) are as follows:

| | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 35.35 | 38.90 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 40.10 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 51.46 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 77.73 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 41.60 | 51.94 | 49.60 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 84.16 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 79.98 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 48.32 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 47.86 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 33.75 | 31.90 | 30.10 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.03 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 54.72 | 53.89 | 53.99 | 57.14 |

Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 models use the HybriDial training dataset. To ensure a fair comparison, we also compare average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found [here](https://huggingface.co/datasets/nvidia/ChatRAG-Bench).

## Prompt Format

**We highly recommend that you use the prompt format we provide, as follows:**

### when context is available

<pre>
System: {System}

{Context}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:
</pre>

### when context is not available

<pre>
System: {System}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:
</pre>

**The content of the system's turn (i.e., {System}) for both scenarios is as follows:**

<pre>
This is a chat between a user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context.
</pre>

**Note that our ChatQA-1.5 models are optimized for working with context, e.g., over documents or retrieved context.**

## How to use

### take the whole document as context

This can be applied to the scenario where the whole document can fit into the model, so that there is no need to run retrieval over the document.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "nvidia/Llama3-ChatQA-1.5-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"}
]

document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |"""

def get_formatted_input(messages, context):
    system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context."
    instruction = "Please give a full and complete answer for the question."

    for item in messages:
        if item['role'] == "user":  ## only apply this instruction for the first user turn
            item['content'] = instruction + " " + item['content']
            break

    conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:"
    formatted_input = system + "\n\n" + context + "\n\n" + conversation

    return formatted_input

formatted_input = get_formatted_input(messages, document)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)

response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

### run retrieval to get top-n chunks as context

This can be applied to the scenario where the document is very long, so it is necessary to run retrieval. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries.
In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B/tree/main/docs) for users to play with.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel
import torch
import json

## load ChatQA-1.5 tokenizer and model
model_id = "nvidia/Llama3-ChatQA-1.5-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

## load retriever tokenizer and model
retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder')
query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder')
context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder')

## prepare documents, we take landrover car manual document that we provide as an example
chunk_list = json.load(open("docs.json"))['landrover']

messages = [
    {"role": "user", "content": "how to connect the bluetooth in the car?"}
]

### running retrieval
## convert query into a format as follows:
## user: {user}\nagent: {agent}\nuser: {user}
formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip()

query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt')
ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt')
query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :]
ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :]

## Compute similarity scores using dot product and rank the similarity
similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx)
ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx)

## get top-n chunks (n=5)
retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]]
context = "\n\n".join(retrieved_chunks)

### running text generation
## get_formatted_input is defined in the previous snippet
formatted_input = get_formatted_input(messages, context)
tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators)

response = outputs[0][tokenized_prompt.input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Correspondence to

Zihan Liu ([email protected]), Wei Ping ([email protected])

## Citation

<pre>
@article{liu2024chatqa,
  title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG},
  author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:2401.10225},
  year={2024}}
</pre>

## License

The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
AAProject/CodeLlama-7b-Instruct-hf-8bits-Q
AAProject
2024-05-24T17:29:22Z
79
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-05-24T02:07:04Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
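A minimal loading sketch based on the tags above (a bitsandbytes-quantized causal LM); it assumes the quantization config is stored with the checkpoint, a CUDA GPU, and the bitsandbytes package, and the prompt is illustrative.

```python
# Minimal sketch: pre-quantized bitsandbytes checkpoints embed their
# quantization config, so a plain from_pretrained should restore them.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AAProject/CodeLlama-7b-Instruct-hf-8bits-Q"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```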
nvidia/Llama3-ChatQA-1.5-8B
nvidia
2024-05-24T17:28:49Z
10,912
555
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "conversational", "en", "arxiv:2401.10225", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-28T21:32:50Z
---
license: llama3
language:
- en
pipeline_tag: text-generation
tags:
- nvidia
- chatqa-1.5
- chatqa
- llama-3
- pytorch
---

## Model Details

We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from the [ChatQA paper](https://arxiv.org/pdf/2401.10225), and it is built on top of the [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM); we then converted the checkpoints to Hugging Face format. **For more information about ChatQA, check the [website](https://chatqa-project.github.io/)!**

## Other Resources

[Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B) &ensp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &ensp; [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) &ensp; [Website](https://chatqa-project.github.io/) &ensp; [Paper](https://arxiv.org/pdf/2401.10225)

## Benchmark Results

Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) are as follows:

| | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B |
| -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 35.35 | 38.90 | 39.33 | 41.26 |
| QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 40.10 | 41.82 | 39.73 | 38.82 |
| QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 51.46 | 48.05 | 49.03 | 51.40 |
| CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 77.73 | 78.57 | 76.46 | 78.44 |
| DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 41.60 | 51.94 | 49.60 | 50.67 |
| ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 84.16 | 73.69 | 78.46 | 81.88 |
| SQA | 61.87 | 74.07 | 69.61 | 79.21 | 79.98 | 69.14 | 73.28 | 83.82 |
| TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 48.32 | 50.98 | 49.96 | 55.63 |
| HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 47.86 | 56.44 | 65.76 | 68.27 |
| INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 33.75 | 31.90 | 30.10 | 32.31 |
| Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.03 | 54.14 | 55.17 | 58.25 |
| Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 54.72 | 53.89 | 53.99 | 57.14 |

Note that ChatQA-1.5 is built on the Llama-3 base model, while ChatQA-1.0 is built on the Llama-2 base model. ChatQA-1.5 models use the HybriDial training dataset. To ensure a fair comparison, we also compare average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found [here](https://huggingface.co/datasets/nvidia/ChatRAG-Bench).

## Prompt Format

**We highly recommend that you use the prompt format we provide, as follows:**

### when context is available

<pre>
System: {System}

{Context}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:
</pre>

### when context is not available

<pre>
System: {System}

User: {Question}

Assistant: {Response}

User: {Question}

Assistant:
</pre>

**The content of the system's turn (i.e., {System}) for both scenarios is as follows:**

<pre>
This is a chat between a user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context. </pre> **Note that our ChatQA-1.5 models are optimized for use with context, e.g., answering over documents or retrieved passages.** ## How to use ### take the whole document as context This applies when the whole document fits into the model's context window, so there is no need to run retrieval over the document. ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "nvidia/Llama3-ChatQA-1.5-8B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") messages = [ {"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"} ] document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |""" def get_formatted_input(messages, context): system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context." instruction = "Please give a full and complete answer for the question." for item in messages: if item['role'] == "user": ## only apply this instruction for the first user turn item['content'] = instruction + " " + item['content'] break conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:" formatted_input = system + "\n\n" + context + "\n\n" + conversation return formatted_input formatted_input = get_formatted_input(messages, document) tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators) response = outputs[0][tokenized_prompt.input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### run retrieval to get top-n chunks as context This applies when the document is very long, so it is necessary to run retrieval. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever, which can handle conversational queries. 
In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B/tree/main/docs) for users to play with. ```python from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel import torch import json ## load ChatQA-1.5 tokenizer and model model_id = "nvidia/Llama3-ChatQA-1.5-8B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") ## load retriever tokenizer and model retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder') query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder') context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder') ## prepare documents; as an example, we use the Land Rover car manual that we provide chunk_list = json.load(open("docs.json"))['landrover'] messages = [ {"role": "user", "content": "how to connect the bluetooth in the car?"} ] ### running retrieval ## convert query into a format as follows: ## user: {user}\nagent: {agent}\nuser: {user} formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip() query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt') ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt') query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :] ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :] ## compute similarity scores using dot product and rank the contexts by similarity similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx) ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx) ## get top-n chunks (n=5) retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]] context = "\n\n".join(retrieved_chunks) ### running text generation formatted_input = get_formatted_input(messages, context) tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators) response = outputs[0][tokenized_prompt.input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Correspondence to Zihan Liu ([email protected]), Wei Ping ([email protected]) ## Citation <pre> @article{liu2024chatqa, title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG}, author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan}, journal={arXiv preprint arXiv:2401.10225}, year={2024}} </pre> ## License The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
ashrafulparan/arbanking77
ashrafulparan
2024-05-24T17:27:51Z
110
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "banking77", "arbanking77", "arafinnlp", "ar", "dataset:SinaLab/ArBanking77", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-24T16:55:17Z
--- license: apache-2.0 datasets: - SinaLab/ArBanking77 language: - ar metrics: - f1 library_name: transformers tags: - banking77 - arbanking77 - arafinnlp --- ### Model Description This is a fine-tuned version of AraBERTv02 on the ArBanking77 dataset for AraFinNLP Task 1. ### Model Hyperparameters - Learning Rate (LR): 4e-2 - Weight Decay (WD): 0 - Warmup Steps (WS): 50 - Epochs (EP): 15
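### How to Use

A minimal usage sketch, assuming the repo holds a standard BERT sequence-classification checkpoint whose labels are the ArBanking77 intents; the example query is illustrative, not from the dataset:

```python
from transformers import pipeline

# Hedged sketch: assumes standard sequence-classification files in this repo;
# the label names come from the uploaded config and are not documented in the card.
classifier = pipeline("text-classification", model="ashrafulparan/arbanking77")

# Illustrative Arabic banking query ("I want to know my account balance")
print(classifier("أريد معرفة رصيد حسابي"))
```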
hgnoi/MnwbNphTqkySt2iq
hgnoi
2024-05-24T17:23:17Z
136
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T17:21:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ashrafulparan/whisper-small-bangla
ashrafulparan
2024-05-24T17:21:57Z
105
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "bengali.ai", "bengali", "bangla", "asr", "dialiect", "bn", "arxiv:1910.09700", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-05-24T17:15:49Z
--- license: apache-2.0 language: - bn metrics: - wer - cer tags: - bengali.ai - bengali - bangla - asr - dialiect --- # Model Card for Model ID This is a fine-tuned version of whisper-small trained on the Bengali.AI dataset for Bengali regional dialects. This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
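Since the card's "How to Get Started" section is still a placeholder, here is a minimal inference sketch, assuming standard Whisper checkpoint files in this repo; the audio path is a placeholder:

```python
from transformers import pipeline

# Hedged sketch: assumes standard Whisper processor/model files in the repo.
asr = pipeline("automatic-speech-recognition", model="ashrafulparan/whisper-small-bangla")

# "bangla_clip.wav" is a placeholder path to a Bengali speech recording.
print(asr("bangla_clip.wav")["text"])
```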
alejandrohdez/textual_inversion_cat
alejandrohdez
2024-05-24T17:21:47Z
1
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-24T10:04:49Z
--- license: creativeml-openrail-m library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion - diffusers-training base_model: runwayml/stable-diffusion-v1-5 inference: true --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Textual inversion text2image fine-tuning - alejandrohdez/textual_inversion_cat These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch is provided below) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
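A stopgap for the "How to use" TODO above: a minimal sketch of loading these weights with diffusers. The prompt token `<cat-toy>` is an assumption; check the learned embedding file in this repo for the actual placeholder token.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch for the TODO above; not the card author's official example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned concept from this repo. "<cat-toy>" below is an assumed
# placeholder token -- use whatever token the embedding was trained with.
pipe.load_textual_inversion("alejandrohdez/textual_inversion_cat")

image = pipe("a photo of a <cat-toy> on the beach", num_inference_steps=30).images[0]
image.save("cat.png")
```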
erikops1/kdrno
erikops1
2024-05-24T17:20:46Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-24T17:20:46Z
--- license: apache-2.0 ---
sanjay920/phi-3-mini_run1
sanjay920
2024-05-24T17:20:32Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "phi3", "text-generation", "llama-factory", "freeze", "generated_from_trainer", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T17:18:33Z
--- license: other base_model: phi-3-mini/base_expanded tags: - llama-factory - freeze - generated_from_trainer model-index: - name: run1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # run1 This model is a fine-tuned version of [phi-3-mini/base_expanded](https://huggingface.co/phi-3-mini/base_expanded) on the fc-json_v2-version1 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.14.5 - Tokenizers 0.19.1
magnifi/phi-3-mini-4k-instruct-attribute-output-4-0524-epoch10
magnifi
2024-05-24T17:20:21Z
78
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T17:18:16Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** magnifi - **License:** apache-2.0 - **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
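A minimal inference sketch, assuming the repo contains merged weights loadable with plain `transformers` (Unsloth exports typically are) and using an illustrative prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "magnifi/phi-3-mini-4k-instruct-attribute-output-4-0524-epoch10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Hedged sketch: assumes merged (non-adapter) weights in this repo.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the exact task format for this fine-tune is not documented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```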
v-urushkin/MixRoBERTa-cl_sort-fin
v-urushkin
2024-05-24T17:17:09Z
108
0
transformers
[ "transformers", "safetensors", "roberta", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-24T15:25:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
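Since the card's "How to Get Started" section is still a placeholder, a minimal fill-mask sketch, assuming a standard RoBERTa masked-LM checkpoint in this repo:

```python
from transformers import pipeline

# Hedged sketch: assumes standard RoBERTa masked-LM files in the repo.
fill = pipeline("fill-mask", model="v-urushkin/MixRoBERTa-cl_sort-fin")

# Use the tokenizer's own mask token so the example works regardless of config.
text = f"The quarterly report was {fill.tokenizer.mask_token} by analysts."
for pred in fill(text)[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```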
bartowski/Faro-Yi-9B-DPO-GGUF
bartowski
2024-05-24T17:13:57Z
226
4
null
[ "gguf", "text-generation", "en", "zh", "dataset:wenbopan/Chinese-dpo-pairs", "dataset:Intel/orca_dpo_pairs", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "dataset:jondurbin/truthy-dpo-v0.1", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-24T16:55:45Z
--- language: - en - zh license: mit datasets: - wenbopan/Chinese-dpo-pairs - Intel/orca_dpo_pairs - argilla/ultrafeedback-binarized-preferences-cleaned - jondurbin/truthy-dpo-v0.1 pipeline_tag: text-generation quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of Faro-Yi-9B-DPO Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization. Original model: https://huggingface.co/wenbopan/Faro-Yi-9B-DPO All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Faro-Yi-9B-DPO-Q8_0.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q8_0.gguf) | Q8_0 | 9.38GB | Extremely high quality, generally unneeded but max available quant. | | [Faro-Yi-9B-DPO-Q6_K.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q6_K.gguf) | Q6_K | 7.24GB | Very high quality, near perfect, *recommended*. | | [Faro-Yi-9B-DPO-Q5_K_M.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q5_K_M.gguf) | Q5_K_M | 6.25GB | High quality, *recommended*. | | [Faro-Yi-9B-DPO-Q5_K_S.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q5_K_S.gguf) | Q5_K_S | 6.10GB | High quality, *recommended*. | | [Faro-Yi-9B-DPO-Q4_K_M.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Faro-Yi-9B-DPO-Q4_K_S.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q4_K_S.gguf) | Q4_K_S | 5.07GB | Slightly lower quality with more space savings, *recommended*. | | [Faro-Yi-9B-DPO-IQ4_NL.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ4_NL.gguf) | IQ4_NL | 5.04GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. | | [Faro-Yi-9B-DPO-IQ4_XS.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ4_XS.gguf) | IQ4_XS | 4.78GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Faro-Yi-9B-DPO-Q3_K_L.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q3_K_L.gguf) | Q3_K_L | 4.69GB | Lower quality but usable, good for low RAM availability. | | [Faro-Yi-9B-DPO-Q3_K_M.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q3_K_M.gguf) | Q3_K_M | 4.32GB | Even lower quality. | | [Faro-Yi-9B-DPO-IQ3_M.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ3_M.gguf) | IQ3_M | 4.05GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Faro-Yi-9B-DPO-IQ3_S.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ3_S.gguf) | IQ3_S | 3.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. 
| | [Faro-Yi-9B-DPO-Q3_K_S.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q3_K_S.gguf) | Q3_K_S | 3.89GB | Low quality, not recommended. | | [Faro-Yi-9B-DPO-IQ3_XS.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ3_XS.gguf) | IQ3_XS | 3.71GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Faro-Yi-9B-DPO-IQ3_XXS.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ3_XXS.gguf) | IQ3_XXS | 3.47GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Faro-Yi-9B-DPO-Q2_K.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-Q2_K.gguf) | Q2_K | 3.35GB | Very low quality but surprisingly usable. | | [Faro-Yi-9B-DPO-IQ2_M.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ2_M.gguf) | IQ2_M | 3.09GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Faro-Yi-9B-DPO-IQ2_S.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ2_S.gguf) | IQ2_S | 2.87GB | Very low quality, uses SOTA techniques to be usable. | | [Faro-Yi-9B-DPO-IQ2_XS.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ2_XS.gguf) | IQ2_XS | 2.70GB | Very low quality, uses SOTA techniques to be usable. | | [Faro-Yi-9B-DPO-IQ2_XXS.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ2_XXS.gguf) | IQ2_XXS | 2.46GB | Lower quality, uses SOTA techniques to be usable. | | [Faro-Yi-9B-DPO-IQ1_M.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ1_M.gguf) | IQ1_M | 2.18GB | Extremely low quality, *not* recommended. | | [Faro-Yi-9B-DPO-IQ1_S.gguf](https://huggingface.co/bartowski/Faro-Yi-9B-DPO-GGUF/blob/main/Faro-Yi-9B-DPO-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Faro-Yi-9B-DPO-GGUF --include "Faro-Yi-9B-DPO-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Faro-Yi-9B-DPO-GGUF --include "Faro-Yi-9B-DPO-Q8_0.gguf/*" --local-dir Faro-Yi-9B-DPO-Q8_0 ``` You can either specify a new local-dir (Faro-Yi-9B-DPO-Q8_0) or download them all in place (./). ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. 
If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card, double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
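For local inference, a minimal sketch with the `llama-cpp-python` bindings using the prompt format above (assumptions: the Q4_K_M file was downloaded to the working directory as shown, and the package is installed with GPU support):

```python
from llama_cpp import Llama

# Hedged sketch: path assumes the Q4_K_M quant downloaded via huggingface-cli above.
llm = Llama(model_path="Faro-Yi-9B-DPO-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

# Prompt format from this card (ChatML-style)
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```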
DeusImperator/Dark-Miqu-70B_exl2_2.4bpw
DeusImperator
2024-05-24T17:13:24Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:152334H/miqu-1-70b-sf", "base_model:merge:152334H/miqu-1-70b-sf", "base_model:Sao10K/Euryale-1.3-L2-70B", "base_model:merge:Sao10K/Euryale-1.3-L2-70B", "base_model:Sao10K/WinterGoddess-1.4x-70B-L2", "base_model:merge:Sao10K/WinterGoddess-1.4x-70B-L2", "base_model:sophosympatheia/Midnight-Rose-70B-v2.0.3", "base_model:merge:sophosympatheia/Midnight-Rose-70B-v2.0.3", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us" ]
text-generation
2024-05-24T14:24:33Z
--- base_model: - 152334H/miqu-1-70b-sf - sophosympatheia/Midnight-Rose-70B-v2.0.3 - Sao10K/Euryale-1.3-L2-70B - Sao10K/WinterGoddess-1.4x-70B-L2 library_name: transformers tags: - mergekit - merge license: other --- ![Dark-Miqu.png](Dark-Miqu.png) # Dark-Miqu-70B - EXL2 2.4bpw This is a 2.4bpw EXL2 quant of [jukofyork/Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B) This quant was made using exllamav2-0.0.21 with the default dataset and settings. This quant fits around 24k context on 24GB VRAM on Windows in my local testing (with exl2 Q4 cache); you might be able to get more depending on what else is using VRAM. I tested this quant briefly in some random RPs (including ones over 8k and 20k context) and it seems to work fine. ## Prompt Templates This model uses Mistral, Alpaca and Vicuna formats. For more details, see the original readme below. ### Original readme below --- A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere! This model **excels** at writing Dark/Grimdark fantasy (see examples below). ***NOTE***: *This model has now been merged with [Dawn-Miqu-70B](https://huggingface.co/jukofyork/Dawn-Miqu-70B) to create [Deep-Miqu-103B](https://huggingface.co/jukofyork/Deep-Miqu-103B) and [Deep-Miqu-120B](https://huggingface.co/jukofyork/Deep-Miqu-120B).* ***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher, see: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).* # Model background Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0). This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`'4.08 +/- 0.02'` vs `'4.02 +/- 0.02'`). It also generates longer responses when prompted. The model was created in two stages: - First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results. - In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model. # Prompting format Vicuna format is preferred: ``` USER: {prompt} ASSISTANT: ``` Mistral and Alpaca formats are also supported: ``` [INST] {prompt} [/INST] ``` ``` ### Instruction: {prompt} ### Response: ``` # Licence and usage restrictions [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. 
All miqu-derived models, including this merge, are suitable for non-commercial, personal use only. # Mergekit configuration The following YAML configuration was used to produce this model: ```yaml name: midnight-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: sophosympatheia/Midnight-Rose-70B-v2.0.3 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: euryale-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/Euryale-1.3-L2-70B base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: winter-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: Sao10K/WinterGoddess-1.4x-70B-L2 base_model: 152334H/miqu-1-70b-sf merge_method: slerp parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] embed_slerp: true tokenizer_source: model:miqu-1-70b-sf dtype: float16 --- name: dark-miqu-70b models: - model: 152334H/miqu-1-70b-sf - model: midnight-miqu-70b - model: euryale-miqu-70b - model: winter-miqu-70b base_model: 152334H/miqu-1-70b-sf merge_method: model_stock dtype: float16 ``` ## Key configuration details: - '`merge_method: slerp`' uses spherical linear interpolation for merging models. - '`parameters: t`' controls the interpolation ratios between models. - '`embed_slerp: true`' applies slerp to the embedding layers. - '`merge_method: model_stock`' uses the '[Model Stock](https://arxiv.org/abs/2403.19522)' method. See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings. **NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file. # Example stories The following mix of "dark" stories were generated using the Vicuna prompt format with no system message and temperature=0: ## Dark fantasy stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes ### ASSISTANT Chapter One: The Crimson Dawn The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks. His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death. Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces. 
He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings. Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close. He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out. He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live." She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat." She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun." He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air. "Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick." She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend. "I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!" Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk." He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape. "Tell me," he said, his voice a whisper. "Tell me, and I'll end this." She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve. Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. 
Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way. He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head. And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous. Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him. They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured. In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain. It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it. He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive. But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight. He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost. And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it. It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows. It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters. It was a story that had only just begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. 
The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. ### ASSISTANT The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat. He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run. Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life. His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place. As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start. The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts. He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced. "Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away." The mirror stared back at him, its surface as still and unyielding as the desert sands. 
He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted. He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets. "What have I done?" he asked the mirror, but it gave no answer. He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous. He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his. The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach. But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time. The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying. He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair. *Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?* But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do. He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. 
The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart. "Shh," he whispered, his voice a soothing balm. "It's over now." The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here." The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me." As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name. *Elara.* The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end. He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release. As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield. "Anyone here need my services?" he asked, his voice devoid of emotion. One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it." The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged. *Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.* He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance. "No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly." The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different. "Why?" he asked, genuinely curious. The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up." The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on. He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick." The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening. The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?* But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. 
And the moor was his to tend, until the next battle came and the cycle began anew. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole. "What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?" The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window. "Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot. The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered. As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely? He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies. "You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice. Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North." The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom. "What do you think it means?" he pressed, unable to keep the desperation at bay. Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here." The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... 
it was a fate worse than any he could imagine. As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone. "We have to get out of here," he croaked, his voice barely above a whisper. Grimgor just grunted in agreement, his gaze never leaving the window. The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment. But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it. "Grimgor," he hissed, tugging on the larger man's arm. "Look." Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it. The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe? The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge. "Who are you?" he demanded, trying to sound braver than he felt. The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice. "I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom." With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell. The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains. "We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here." The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown. Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other. "What now?" the Fox asked, his voice barely above a whisper. Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine. "Now," he said, "we find out what's really going on." And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. 
The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. ### ASSISTANT Chapter 1: Adrift The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism. In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford. Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs. She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel. As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay. Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills. "Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now." She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement. "Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines." Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper. "They call him the Mechanic," one of the older girls replied. 
"He used to work in engineering before...before everything fell apart." The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others. "It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work." Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!" The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over? As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them. And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart. As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive? </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in. ### ASSISTANT Chapter 1: Ashes of Exodus The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began. As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago. Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself. But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. 
His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth. He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely. Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online. And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale. His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps. "What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear. Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly. Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened." The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return. As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived. But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void. He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next." Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here." Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft." Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet. 
As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened. And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why? The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought. He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look." The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be. As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand. But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one. He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... ### ASSISTANT Chapter 1: The Biodome The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet. Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known. "Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal." He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland. 
But today was not that day. Today, it was just another tomb waiting to be sealed. As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked. "What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?" Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion. "I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good." Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead. The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine. "Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!" Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake. With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom. "Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!" But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness. And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more. When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones. Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten. Kaelen closed his eyes, willing himself to accept the inevitable. 
But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard. And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors. But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile. "Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity. And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1 - Whispers from the Void The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos. His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge. A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean. The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake. Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger. He didn't hesitate. 
His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now. As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again. But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act. The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe. As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch. But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home. The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything. What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence. Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind. He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here. Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. 
The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that, at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only 6 can survive: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today. ### ASSISTANT In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash. His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used. The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations. Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long? He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes. He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff. Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news." The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply. "The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years." Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?" "We have enough for six, maybe seven if we ration carefully. But not twelve." The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close. "We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes." Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival. 
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure. In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now. He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary. Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking. Samuel shook his head. "I don't know." "We can't just let six of us die," she said, her voice rising in panic. "There has to be another way." He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?" Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up." Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic." She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic." They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic." Samuel raised an eyebrow. "What?" "We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface." He stared at her in disbelief. "You can't be serious." "It's not ideal," she admitted. "But it's better than just letting them die down here." Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have? As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor. What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls. Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel. And all he could do was watch as the world above claimed them, one by one. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid 1970s. 
There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter One: The Golden Veil The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune. In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely. His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything. Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within. Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner. One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world. "Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?" The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion. "No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me." He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within. "I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it." The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box." As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for. 
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness. But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher. Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones. "I must see this box for myself," he said, his voice barely above a whisper. "Take me to it." The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels. There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity. As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls. A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe." Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe. "But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds." And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean? As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him! He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety. And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself. But little did he know that the true mystery was only just beginning... </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. 
The story should not end well for either of them. ### ASSISTANT Chapter 1 The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live. The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light. I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line." I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them. A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth. I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness. When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming." And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step. I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there... But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction. "Halt!" one of them yelled, his voice harsh and commanding. "Hands up!" I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. 
We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him. "Run," he whispered, his voice barely audible. "Leave me." I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything. But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat. "Nein," I whispered, my voice choked with grief and rage. "Nein!" I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us. With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness. As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me. But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel. His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work. A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips. The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. 
He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface. Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival. "You're late," Chigurh said finally, his voice low and gravelly. The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic." Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement. Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket. "You know what needs to be done," the man in the suit said, his voice barely above a whisper. Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress. Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary. He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose. Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die. He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction. As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move. Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw. 
</details> Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
hgnoi/hpDT0kcH2TBnGQTX
hgnoi
2024-05-24T16:59:48Z
137
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T16:58:05Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vuongnhathien/convnext-tiny-1e-4
vuongnhathien
2024-05-24T16:55:56Z
198
0
transformers
[ "transformers", "tensorboard", "safetensors", "convnextv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnextv2-tiny-22k-384", "base_model:finetune:facebook/convnextv2-tiny-22k-384", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-24T14:28:38Z
--- license: apache-2.0 base_model: facebook/convnextv2-tiny-22k-384 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: convnext-tiny-1e-4 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9341269841269841 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-tiny-1e-4 This model is a fine-tuned version of [facebook/convnextv2-tiny-22k-384](https://huggingface.co/facebook/convnextv2-tiny-22k-384) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2564 - Accuracy: 0.9341 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5828 | 1.0 | 550 | 0.4074 | 0.8875 | | 0.4404 | 2.0 | 1100 | 0.4093 | 0.8811 | | 0.3503 | 3.0 | 1650 | 0.3391 | 0.9018 | | 0.2636 | 4.0 | 2200 | 0.3079 | 0.9161 | | 0.2217 | 5.0 | 2750 | 0.3068 | 0.9169 | | 0.2024 | 6.0 | 3300 | 0.2839 | 0.9284 | | 0.1565 | 7.0 | 3850 | 0.2781 | 0.9324 | | 0.1203 | 8.0 | 4400 | 0.2708 | 0.9388 | | 0.1281 | 9.0 | 4950 | 0.2707 | 0.9364 | | 0.1014 | 10.0 | 5500 | 0.2693 | 0.9380 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
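The card does not yet include a usage snippet. Below is a minimal inference sketch, assuming the checkpoint works with the standard `image-classification` pipeline; `test.jpg` is a placeholder path for any local image, not a file from this repo.

```python
from transformers import pipeline

# Load the fine-tuned ConvNeXtV2 classifier from the Hub
classifier = pipeline("image-classification", model="vuongnhathien/convnext-tiny-1e-4")

# Replace "test.jpg" with any local image path
for pred in classifier("test.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```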
Augusto777/vit-base-patch16-224-U8-10b
Augusto777
2024-05-24T16:52:38Z
222
1
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-23T15:17:33Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-U8-10b results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8627450980392157 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-U8-10b This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset "dmae-ve-U8". It achieves the following results on the evaluation set: - Loss: 0.5349 - Accuracy: 0.8627 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2473 | 1.0 | 20 | 1.1671 | 0.5882 | | 0.955 | 2.0 | 40 | 0.9392 | 0.6471 | | 0.735 | 3.0 | 60 | 0.7247 | 0.6863 | | 0.5341 | 4.0 | 80 | 0.5977 | 0.8235 | | 0.3864 | 5.0 | 100 | 0.6556 | 0.7451 | | 0.2837 | 6.0 | 120 | 0.6781 | 0.7255 | | 0.2332 | 7.0 | 140 | 0.5419 | 0.8431 | | 0.1974 | 8.0 | 160 | 0.5349 | 0.8627 | | 0.1857 | 9.0 | 180 | 0.5606 | 0.8235 | | 0.1907 | 10.0 | 200 | 0.4875 | 0.8431 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
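No usage example is given above, so here is a hedged sketch using the lower-level API. The label names depend on the "dmae-ve-U8" imagefolder classes, which the card does not list, and `scan.png` is a placeholder path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "Augusto777/vit-base-patch16-224-U8-10b"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("scan.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Label names come from the fine-tuning dataset's folder structure
print(model.config.id2label[logits.argmax(-1).item()])
```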
fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-772252
fine-tuned
2024-05-24T16:50:10Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-772252", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-24T16:49:32Z
--- license: apache-2.0 datasets: - fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-772252 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-772252', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
paul-stansifer/nn-qwantz-letter-pool
paul-stansifer
2024-05-24T16:48:23Z
0
0
null
[ "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "region:us" ]
null
2024-05-06T19:46:44Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]

```
Score 117.221598997
75%: discarding 76% (cutoff is 0.4434)
90%: discarding 62% (cutoff is 0.2712)
95%: discarding 44% (cutoff is 0.1397)
99%: discarding 30% (cutoff is 0.0555)
test loss: 0.5535977482795715
```
behnamoh/Phi-3-medium-4k-instruct-Q4_0-GGUF
behnamoh
2024-05-24T16:43:57Z
15
1
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-05-24T16:43:33Z
--- language: - multilingual license: mit tags: - nlp - code - llama-cpp - gguf-my-repo license_link: https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/resolve/main/LICENSE pipeline_tag: text-generation inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # behnamoh/Phi-3-medium-4k-instruct-Q4_0-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-medium-4k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through Homebrew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo behnamoh/Phi-3-medium-4k-instruct-Q4_0-GGUF --model phi-3-medium-4k-instruct-q4_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo behnamoh/Phi-3-medium-4k-instruct-Q4_0-GGUF --model phi-3-medium-4k-instruct-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && \ cd llama.cpp && \ make && \ ./main -m phi-3-medium-4k-instruct-q4_0.gguf -n 128 ```
JawadC/mont-d-or
JawadC
2024-05-24T16:43:35Z
1
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-24T16:16:36Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of Mont d'Or cheese widget: - text: A bowl of Mont d'or cheese on a rustic wooden table. output: url: image_0.png - text: A bowl of Mont d'or cheese on a rustic wooden table. output: url: image_1.png - text: A bowl of Mont d'or cheese on a rustic wooden table. output: url: image_2.png - text: A bowl of Mont d'or cheese on a rustic wooden table. output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - JawadC/mont-d-or <Gallery /> ## Model description These are JawadC/mont-d-or LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of Mont d'Or cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/JawadC/mont-d-or/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
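Since the "How to use" snippet above is still a TODO, here is a minimal sketch for loading these DreamBooth LoRA weights with diffusers. The dtype, device, and prompt are assumptions, not settings recorded in this card.

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model named in the card, then attach the LoRA weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/mont-d-or")

# Use the trigger phrase from the card in the prompt
image = pipe("a photo of Mont d'Or cheese on a rustic wooden table").images[0]
image.save("mont_d_or.png")
```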
ManyaGupta/distil-medium.en-ft-children-with-sli
ManyaGupta
2024-05-24T16:29:37Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "audio-classification", "generated_from_trainer", "base_model:distil-whisper/distil-medium.en", "base_model:finetune:distil-whisper/distil-medium.en", "license:mit", "endpoints_compatible", "region:us" ]
audio-classification
2024-05-24T12:58:32Z
--- license: mit base_model: distil-whisper/distil-medium.en tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: distil-medium.en-ft-children-with-sli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distil-medium.en-ft-children-with-sli This model is a fine-tuned version of [distil-whisper/distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) on the ChildrenSLIDataset dataset. It achieves the following results on the evaluation set: - Loss: 0.0456 - Accuracy: 0.9960 - F1: 0.9958 - Precision: 0.9968 - Recall: 0.9949 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0001 | 1.0 | 504 | 0.2825 | 0.9643 | 0.9619 | 0.9722 | 0.9545 | | 0.0009 | 2.0 | 1008 | 0.0834 | 0.9881 | 0.9875 | 0.9884 | 0.9866 | | 0.0002 | 3.0 | 1512 | 0.0442 | 0.9921 | 0.9917 | 0.9935 | 0.9899 | | 0.0001 | 4.0 | 2016 | 0.1658 | 0.9762 | 0.9751 | 0.9736 | 0.9768 | | 0.0 | 5.0 | 2520 | 0.0390 | 0.9960 | 0.9958 | 0.9968 | 0.9949 | | 0.0 | 6.0 | 3024 | 0.0420 | 0.9960 | 0.9958 | 0.9968 | 0.9949 | | 0.0 | 7.0 | 3528 | 0.0436 | 0.9960 | 0.9958 | 0.9968 | 0.9949 | | 0.0 | 8.0 | 4032 | 0.0446 | 0.9960 | 0.9958 | 0.9968 | 0.9949 | | 0.0 | 9.0 | 4536 | 0.0452 | 0.9960 | 0.9958 | 0.9968 | 0.9949 | | 0.0 | 10.0 | 5040 | 0.0456 | 0.9960 | 0.9958 | 0.9968 | 0.9949 | ### Framework versions - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
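The card lists no inference code. The sketch below assumes the checkpoint is compatible with the standard `audio-classification` pipeline; `child_speech.wav` is a placeholder for a local recording, and the label names come from the ChildrenSLIDataset, which the card does not enumerate.

```python
from transformers import pipeline

clf = pipeline(
    "audio-classification",
    model="ManyaGupta/distil-medium.en-ft-children-with-sli",
)

# Returns the predicted labels with scores; label names are defined
# by the training dataset and are not listed in the card
print(clf("child_speech.wav"))
```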
muellerzr/llama-3-8B-self-instruct-LoRA
muellerzr
2024-05-24T16:23:46Z
0
2
peft
[ "peft", "safetensors", "llama", "generated_from_trainer", "dataset:muellerzr/llama-3-8b-self-align-data-generation-results", "license:llama3", "region:us" ]
null
2024-05-24T14:00:17Z
--- library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-3-8B model-index: - name: qlora_decrease_lr_promptfix results: [] license: llama3 datasets: - muellerzr/llama-3-8b-self-align-data-generation-results --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## Llama-3 8B Self-Instruct: PEFT Edition This model is the result of recreating the [StarCoder2 Self-Instruct](https://huggingface.co/blog/sc2-instruct) pipeline, but applied to Llama-3-8B. It could not have been done without the blood, sweat, and tears of my dear friends who have helped me along the way with training my first LLM. A blog will come shortly detailing the many training runs and failures during this. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: llama3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: true strict: false datasets: - path: llama-3-8b-self-align-data-generation-results/sanitized.jsonl ds_type: json type: system_prompt: "You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions." field_system: system field_instruction: instruction field_output: response format: "### Instruction:\n{instruction}\n\n### Response:\n" no_input_format: "### Instruction:\n{instruction}\n\n### Response:\n" dataset_prepared_path: val_set_size: 0.05 sequence_len: 2048 sample_packing: true pad_to_sequence_len: true adapter: qlora save_safetensors: true lora_model_dir: lora_r: 64 lora_alpha: 32 lora_dropout: 0.05 lora_target_linear: true lora_fan_in_fan_out: log_with: None wandb_project: llama-3-8b-self-align-axolotl wandb_entity: wandb_watch: wandb_name: qlora-prince-hps-promptfix output_dir: qlora_decrease_lr_promptfix wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 2 num_epochs: 4 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 8 eval_table_size: eval_max_new_tokens: 128 saves_per_epoch: 2 debug: deepspeed: weight_decay: 0.0 fsdp: - full_shard - auto_wrap fsdp_config: fsdp_limit_all_gathers: true fsdp_sync_module_states: false fsdp_offload_params: false fsdp_use_orig_params: false fsdp_cpu_ram_efficient_loading: false fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer fsdp_state_dict_type: FULL_STATE_DICT fsdp_sharding_strategy: FULL_SHARD special_tokens: eos_token: "<|im_end|>" pad_token: "<|end_of_text|>" tokens: - "<|im_start|>" - "<|im_end|>" lora_modules_to_save: - embed_tokens - lm_head ``` </details><br> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/muellerzr/llama-3-8b-self-align-axolotl/runs/2q8jhm3e) # qlora_decrease_lr_promptfix This model was trained from scratch on 
the [muellerzr/llama-3-8b-self-align-data-generation-results](https://huggingface.co/datasets/muellerzr/llama-3-8b-self-align-data-generation-results) dataset. It achieves the following results on the evaluation set: - Loss: 0.4121 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6903 | 0.0061 | 1 | 0.6706 | | 0.6463 | 0.1285 | 21 | 0.6392 | | 0.4944 | 0.2571 | 42 | 0.4806 | | 0.4495 | 0.3856 | 63 | 0.4532 | | 0.4444 | 0.5142 | 84 | 0.4406 | | 0.4185 | 0.6427 | 105 | 0.4334 | | 0.4336 | 0.7712 | 126 | 0.4286 | | 0.4061 | 0.8998 | 147 | 0.4252 | | 0.4002 | 1.0145 | 168 | 0.4221 | | 0.4013 | 1.1431 | 189 | 0.4205 | | 0.3674 | 1.2716 | 210 | 0.4189 | | 0.3942 | 1.4002 | 231 | 0.4175 | | 0.3984 | 1.5287 | 252 | 0.4165 | | 0.3867 | 1.6572 | 273 | 0.4150 | | 0.3872 | 1.7858 | 294 | 0.4137 | | 0.401 | 1.9143 | 315 | 0.4130 | | 0.3602 | 2.0275 | 336 | 0.4126 | | 0.3817 | 2.1561 | 357 | 0.4131 | | 0.3592 | 2.2846 | 378 | 0.4129 | | 0.3729 | 2.4132 | 399 | 0.4127 | | 0.372 | 2.5417 | 420 | 0.4121 | | 0.3685 | 2.6702 | 441 | 0.4120 | | 0.3732 | 2.7988 | 462 | 0.4115 | | 0.38 | 2.9273 | 483 | 0.4112 | | 0.3637 | 3.0413 | 504 | 0.4114 | | 0.3628 | 3.1699 | 525 | 0.4118 | | 0.355 | 3.2984 | 546 | 0.4122 | | 0.3646 | 3.4269 | 567 | 0.4121 | | 0.3496 | 3.5555 | 588 | 0.4121 | | 0.3573 | 3.6840 | 609 | 0.4121 | | 0.3598 | 3.8125 | 630 | 0.4121 | | 0.3669 | 3.9411 | 651 | 0.4121 | ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu118 - Datasets 2.19.1 - Tokenizers 0.19.1
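For completeness, here is a hedged sketch of loading the adapter with PEFT. It assumes the repo's tokenizer is usable directly and that you have access to the gated Llama-3 base weights; the instruction text is an arbitrary example, while the prompt format is taken from the axolotl config above.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "muellerzr/llama-3-8B-self-instruct-LoRA"
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

# Prompt format from the axolotl config in this card
prompt = "### Instruction:\nWrite a function that checks if a number is prime.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```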
Omriy123/vit_epochs5_batch64_lr0.001_size224_tiles1_seed1_vit_old_transform_old_hp
Omriy123
2024-05-24T16:16:32Z
201
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-24T15:27:10Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit_epochs5_batch64_lr0.001_size224_tiles1_seed1_vit_old_transform_old_hp results: - task: name: Image Classification type: image-classification dataset: name: Dogs_vs_Cats type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.7538666666666667 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit_epochs5_batch64_lr0.001_size224_tiles1_seed1_vit_old_transform_old_hp This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Dogs_vs_Cats dataset. It achieves the following results on the evaluation set: - Loss: 0.5220 - Accuracy: 0.7539 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6668 | 1.0 | 235 | 0.6653 | 0.5725 | | 0.6527 | 2.0 | 470 | 0.6233 | 0.6528 | | 0.5628 | 3.0 | 705 | 0.5658 | 0.7048 | | 0.4683 | 4.0 | 940 | 0.5314 | 0.7291 | | 0.3694 | 5.0 | 1175 | 0.5220 | 0.7539 | ### Framework versions - Transformers 4.41.0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
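As with the other auto-generated cards, no inference snippet is provided. This sketch converts the logits to class probabilities for the Dogs_vs_Cats labels; `photo.jpg` is a placeholder path.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "Omriy123/vit_epochs5_batch64_lr0.001_size224_tiles1_seed1_vit_old_transform_old_hp"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

inputs = processor(images=Image.open("photo.jpg").convert("RGB"), return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]  # per-class probabilities

for idx, p in enumerate(probs):
    print(model.config.id2label[idx], f"{p.item():.3f}")
```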
sezing/mistral_instruct_generation
sezing
2024-05-24T16:15:20Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-03-21T17:57:35Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.1 datasets: - generator model-index: - name: mistral_instruct_generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.2524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5825 | 0.0238 | 20 | 1.4437 | | 1.5183 | 0.0476 | 40 | 1.3753 | | 1.4693 | 0.0713 | 60 | 1.2976 | | 1.4309 | 0.0951 | 80 | 1.2623 | | 1.3628 | 0.1189 | 100 | 1.2524 | ### Framework versions - PEFT 0.11.1 - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
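No usage code is included above, so a hedged loading sketch follows. It assumes access to the Mistral base weights and that the adapter applies cleanly with `PeftModel`; the chat message is an arbitrary example.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "sezing/mistral_instruct_generation")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained(base_id)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```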
fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-423936
fine-tuned
2024-05-24T16:14:19Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-423936", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-24T16:13:48Z
--- license: apache-2.0 datasets: - fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-423936 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SCIDOCS-256-24-gpt-4o-2024-05-13-423936', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157
fine-tuned
2024-05-24T16:13:45Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-24T16:13:12Z
--- license: apache-2.0 datasets: - fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/SciFact-256-24-gpt-4o-2024-05-13-812157', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
hgnoi/3l27nduJKr5IfdHV
hgnoi
2024-05-24T16:12:29Z
136
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T16:10:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Augusto777/vit-base-patch16-224-U8-10c
Augusto777
2024-05-24T16:11:35Z
222
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-base-patch16-224", "base_model:finetune:google/vit-base-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-05-24T14:43:52Z
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-U8-10c results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.8823529411764706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-U8-10c This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6082 - Accuracy: 0.8824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2589 | 1.0 | 20 | 1.1823 | 0.5882 | | 0.9852 | 2.0 | 40 | 0.9538 | 0.6471 | | 0.7617 | 3.0 | 60 | 0.7493 | 0.7255 | | 0.5658 | 4.0 | 80 | 0.6082 | 0.8824 | | 0.4212 | 5.0 | 100 | 0.6274 | 0.8235 | | 0.3139 | 6.0 | 120 | 0.5643 | 0.7843 | | 0.2755 | 7.0 | 140 | 0.4752 | 0.8824 | | 0.2262 | 8.0 | 160 | 0.5290 | 0.8039 | | 0.2151 | 9.0 | 180 | 0.5825 | 0.7843 | | 0.2126 | 10.0 | 200 | 0.5307 | 0.8235 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
JawadC/gruyere
JawadC
2024-05-24T16:10:05Z
1
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-24T15:43:12Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of Gruyère cheese widget: - text: A piece of Gruyère cheese on a rustic wooden table. output: url: image_0.png - text: A piece of Gruyère cheese on a rustic wooden table. output: url: image_1.png - text: A piece of Gruyère cheese on a rustic wooden table. output: url: image_2.png - text: A piece of Gruyère cheese on a rustic wooden table. output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - JawadC/gruyere <Gallery /> ## Model description These are JawadC/gruyere LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of Gruyère cheese` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](https://huggingface.co/JawadC/gruyere/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
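The "How to use" snippet above is still a TODO; here is a minimal hedged sketch adapted to this repo's trigger phrase. The dtype, device, and prompt are assumptions, not recorded training settings.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base model from the card, then attach this repo's LoRA weights
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("JawadC/gruyere")

image = pipe("A piece of Gruyère cheese on a rustic wooden table.").images[0]
image.save("gruyere.png")
```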
randomani/DDIM-tattoo-32
randomani
2024-05-24T16:09:21Z
5
0
diffusers
[ "diffusers", "safetensors", "art", "en", "dataset:Drozdik/tattoo_v0", "license:apache-2.0", "diffusers:DDIMPipeline", "region:us" ]
null
2024-05-24T09:02:48Z
--- library_name: diffusers license: apache-2.0 datasets: - Drozdik/tattoo_v0 language: - en tags: - art --- ## Model Details **Abstract**: *Trained an unconditional diffusion model on a tattoo dataset with the DDIM noise scheduler.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "randomani/DDIM-tattoo-32" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm().images[0] # save image image.save("ddpm_generated_image.png") ```
randomani/DDPM-emoji-64
randomani
2024-05-24T16:08:40Z
4
0
diffusers
[ "diffusers", "safetensors", "art", "en", "dataset:valhalla/emoji-dataset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2024-05-24T15:50:08Z
--- library_name: diffusers license: apache-2.0 datasets: - valhalla/emoji-dataset language: - en tags: - art --- ## Model Details **Abstract**: *Trained an unconditional diffusion model on the emoji dataset with the DDPM noise scheduler.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "randomani/DDPM-emoji-64" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm().images[0] # save image image.save("ddpm_generated_image.png") ``` ## Samples Generated 1. ![sample_1](https://huggingface.co/randomani/DDPM-emoji-64/resolve/main/1.png) 2. ![sample_2](https://huggingface.co/randomani/DDPM-emoji-64/resolve/main/2.png) 3. ![sample_3](https://huggingface.co/randomani/DDPM-emoji-64/resolve/main/3.png) 4. ![sample_4](https://huggingface.co/randomani/DDPM-emoji-64/resolve/main/4.png)
DioBot2000/Frozen11-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news
DioBot2000
2024-05-24T16:08:27Z
108
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-24T16:08:00Z
--- license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: Frozen11-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Frozen11-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6206 - Accuracy: 0.3654 - Precision: 0.5146 - Recall: 0.5208 - F1: 0.4022 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 132 | 0.6667 | 0.3546 | 0.5202 | 0.4775 | 0.3676 | | No log | 2.0 | 264 | 0.6564 | 0.3574 | 0.5171 | 0.4956 | 0.3796 | | No log | 3.0 | 396 | 0.6472 | 0.3599 | 0.5112 | 0.4998 | 0.3840 | | 0.6062 | 4.0 | 528 | 0.6354 | 0.3622 | 0.5107 | 0.5109 | 0.3927 | | 0.6062 | 5.0 | 660 | 0.6282 | 0.3641 | 0.5198 | 0.5115 | 0.3962 | | 0.6062 | 6.0 | 792 | 0.6254 | 0.3647 | 0.5192 | 0.5176 | 0.3988 | | 0.6062 | 7.0 | 924 | 0.6212 | 0.3653 | 0.5156 | 0.5224 | 0.4040 | | 0.5499 | 8.0 | 1056 | 0.6206 | 0.3654 | 0.5146 | 0.5208 | 0.4022 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
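The card stops at the metrics. The sketch below assumes the saved config carries the CEFR tag names and that the standard `token-classification` pipeline applies; the example sentence is arbitrary.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="DioBot2000/Frozen11-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news",
    aggregation_strategy="simple",  # merge word pieces into whole words
)

# Prints each recognized span with its CEFR tag and confidence score
print(ner("The government announced unprecedented fiscal measures."))
```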
ritam567/reo_first
ritam567
2024-05-24T16:05:46Z
0
0
null
[ "license:mit", "region:us" ]
null
2024-05-24T14:10:59Z
--- title: Sales RAG emoji: 🌍 colorFrom: yellow colorTo: green sdk: docker pinned: false license: mit --- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
FredrikBL/MisTyr-ties
FredrikBL
2024-05-24T16:03:17Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "timpal0l/Mistral-7B-v0.1-flashback-v2", "mlabonne/NeuralHermes-2.5-Mistral-7B", "RJuro/munin-neuralbeagle-7b", "base_model:RJuro/munin-neuralbeagle-7b", "base_model:merge:RJuro/munin-neuralbeagle-7b", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:timpal0l/Mistral-7B-v0.1-flashback-v2", "base_model:merge:timpal0l/Mistral-7B-v0.1-flashback-v2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T15:59:31Z
--- tags: - merge - mergekit - lazymergekit - timpal0l/Mistral-7B-v0.1-flashback-v2 - mlabonne/NeuralHermes-2.5-Mistral-7B - RJuro/munin-neuralbeagle-7b base_model: - timpal0l/Mistral-7B-v0.1-flashback-v2 - mlabonne/NeuralHermes-2.5-Mistral-7B - RJuro/munin-neuralbeagle-7b --- # MisTyr-ties MisTyr-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [timpal0l/Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) * [RJuro/munin-neuralbeagle-7b](https://huggingface.co/RJuro/munin-neuralbeagle-7b) ## 🧩 Configuration ```yaml models: - model: AI-Sweden-Models/tyr # no parameters necessary for base model - model: timpal0l/Mistral-7B-v0.1-flashback-v2 parameters: density: 0.5 weight: 0.5 - model: mlabonne/NeuralHermes-2.5-Mistral-7B parameters: density: 0.5 weight: 0.3 - model: RJuro/munin-neuralbeagle-7b parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient merge_method: ties base_model: AI-Sweden-Models/tyr parameters: normalize: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "FredrikBL/MisTyr-ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
opencsg/csg-wukong-1B-VL-v0.1
opencsg
2024-05-24T15:55:51Z
62
2
transformers
[ "transformers", "safetensors", "csg-vl-wukong", "text-generation", "code", "image-to-text", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
2024-04-30T16:13:22Z
--- language: - en pipeline_tag: image-to-text tags: - code license: apache-2.0 --- # **csg-wukong-1B-VL-v0.1** [[中文]](#chinese) [[English]](#english) <a id="english"></a> <p align="center"> <img width="900px" alt="OpenCSG" src="./csg-wukong-logo-green.jpg"> </p> <p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p> </div> OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, indicating the integration and full utilization of hybrid resources. The 'S' stands for Software refinement, signifying software that is refined by large models. The 'G' represents Generative LM, which denotes widespread, inclusive, and democratized generative large models. The vision of OpenCSG is to empower every industry, every company, and every individual to own their models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, send feedback, and contribute collaboratively. ## Model Description [CSG-VL](https://github.com/OpenCSGs/csg-vl) is a family of small but strong multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Wukong-1B, Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. ## Quickstart Here is a code snippet showing how to use the model with transformers. Before running it, you need to install the following dependencies: ```shell pip install torch transformers accelerate pillow ``` If CUDA memory is sufficient, this snippet executes faster with `CUDA_VISIBLE_DEVICES=0`. Users, especially those in mainland China, may want to refer to [OpenCSG.com](https://opencsg.com). ```python import torch import transformers from transformers import AutoModelForCausalLM, AutoTokenizer from PIL import Image import warnings # disable some warnings transformers.logging.set_verbosity_error() transformers.logging.disable_progress_bar() warnings.filterwarnings('ignore') # set device torch.set_default_device('cpu') # or 'cuda' model_name = 'opencsg/csg-wukong-1B-VL-v0.1' # create model model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, device_map='auto', trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained( model_name, trust_remote_code=True) # text prompt prompt = 'What is the astronaut holding in his hand?' text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
USER: <image>\n{prompt} ASSISTANT:" text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')] input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0) image = Image.open('example_1.png') image_tensor = model.process_images([image], model.config).to(dtype=model.dtype) # generate output_ids = model.generate( input_ids, images=image_tensor, max_new_tokens=100, use_cache=True)[0] print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()) ``` ## Software - **Orchestration:** [Deepspeed](https://github.com/OpenCSGs) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) - **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex) <a id="chinese"></a> <p> </p> # OpenCSG Introduction <p align="center"> <img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg"> </p> <p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[WeChat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p> </div> In OpenCSG, "Open" stands for open source and openness; "C" stands for Converged resources, integrating and fully utilizing hybrid heterogeneous resources to cut compute costs and raise efficiency; "S" stands for Software refined, redefining software delivery by driving development with large models, cutting labor costs and raising efficiency; "G" stands for Generative LM: popularized, inclusive, and democratized open-source generative models that are commercially usable. OpenCSG's vision is for every industry, every company, and every individual to own their own model. We adhere to the principles of openness and open source, releasing OpenCSG's large-model software stack to the community; everyone is welcome to use it, give feedback, contribute, and follow along. ## Model Introduction [CSG-VL](https://github.com/OpenCSGs/csg-vl) is a family of small but strong multimodal models. It offers multiple plug-and-play vision encoders, such as EVA-CLIP and SigLIP, and language backbones, including Wukong-1B, Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, and Phi-2. ## Quickstart The dependencies and code snippet are identical to the English Quickstart section above. ## Hardware - **GPU count:** 16 H800 - **Training time:** 43 days ## Software - **Fine-tuning framework:** [Deepspeed](https://github.com/OpenCSGs) - **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch) - **BF16:** [apex](https://github.com/NVIDIA/apex)
saransh03sharma/mintrec2-llama-2-13b-50-1
saransh03sharma
2024-05-24T15:55:29Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T15:47:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ninyx/Mistral-7B-Instruct-v0.2-advisegpt-v0.5
ninyx
2024-05-24T15:54:49Z
2
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-21T20:13:11Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer base_model: mistralai/Mistral-7B-Instruct-v0.2 datasets: - generator metrics: - bleu - rouge model-index: - name: Mistral-7B-Instruct-v0.2-advisegpt-v0.5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral-7B-Instruct-v0.2-advisegpt-v0.5 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.0840 - Bleu: {'bleu': 0.9537910015397628, 'precisions': [0.9763005593772222, 0.9591762297332277, 0.9471223357463351, 0.9370695448087227], 'brevity_penalty': 0.9989373668126428, 'length_ratio': 0.9989379310075293, 'translation_length': 1022387, 'reference_length': 1023474} - Rouge: {'rouge1': 0.9741038510844018, 'rouge2': 0.9550445541823809, 'rougeL': 0.9723656951648176, 'rougeLsum': 0.9736935611588988} - Exact Match: {'exact_match': 0.0} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 3 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 10 - total_train_batch_size: 30 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Bleu | Exact Match | Validation Loss | Rouge | |:-------------:|:------:|:----:|:----:|:-----------:|:---------------:|:-----:| | 0.069 | 0.9999 | 829 | {'bleu': 0.9459206747141892, 'brevity_penalty': 0.998656611989374, 'length_ratio': 0.9986575135274565, 'precisions': [0.9726768417963018, 0.9521542081327253, 0.9380288226144853, 0.9265355643009697], 'reference_length': 1023474, 'translation_length': 1022100} | {'exact_match': 0.0} | 0.0990 | {'rouge1': 0.9702189356306301, 'rouge2': 0.9472171244648081, 'rougeL': 0.9677029434775739, 'rougeLsum': 0.9695684693436178} | | 0.0501 | 1.9999 | 1658 | {'bleu': 0.9537910015397628, 'brevity_penalty': 0.9989373668126428, 'length_ratio': 0.9989379310075293, 'precisions': [0.9763005593772222, 0.9591762297332277, 0.9471223357463351, 0.9370695448087227], 'reference_length': 1023474, 'translation_length': 1022387} | {'exact_match': 0.0} | 0.0840 | {'rouge1': 0.9741105562035488, 'rouge2': 0.9550205654651982, 'rougeL': 0.9723363685950056, 'rougeLsum': 0.9737013621980013} | | 0.0479 | 2.9999 | 2487 | {'bleu': 0.9548514568958526, 'precisions': [0.9767648848122783, 0.9601353822381405, 0.9483682511725553, 0.9385079979703334], 'brevity_penalty': 0.9989676875356347, 'length_ratio': 0.9989682200036347, 'translation_length': 1022418, 'reference_length': 1023474} | {'exact_match': 0.0} | 0.0850 | {'rouge1': 0.9746456572659052, 'rouge2': 0.9560608145101823, 'rougeL': 0.9729518327172596, 'rougeLsum': 0.9742472834405176} | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
Ramikan-BR/tinyllama-coder-py-4bit_LORA-v8
Ramikan-BR
2024-05-24T15:54:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "base_model:finetune:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-24T15:54:24Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** Ramikan-BR - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
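Not part of the original card: a minimal inference sketch, assuming the repository holds Unsloth-saved LoRA weights over the tinyllama-chat base and that a CUDA GPU is available.

```python
from unsloth import FastLanguageModel

# Load the fine-tuned checkpoint in 4-bit; repo id taken from this listing.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Ramikan-BR/tinyllama-coder-py-4bit_LORA-v8",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference mode

# Illustrative coding prompt for a Python-coder fine-tune.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```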
nlpzhaof/aligngpt-7b
nlpzhaof
2024-05-24T15:54:21Z
4
0
transformers
[ "transformers", "pytorch", "aligngpt", "text-generation", "en", "arxiv:2405.14129", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T05:51:51Z
--- license: apache-2.0 language: - en --- # AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability [[Project Page](https://aligngpt-vl.github.io/)] [[Paper](https://arxiv.org/abs/2405.14129)] [[Demo](http://47.116.173.89:7870/)] [[Model](https://huggingface.co/nlpzhaof)] Authors: [Fei Zhao*](https://scholar.google.com/citations?user=V01xzWQAAAAJ&hl=zh-CN), Taotian Pang*, Chunhui Li, [Zhen Wu](https://scholar.google.com/citations?user=IoGlgtoAAAAJ&hl=zh-CN), Junjie Guo, Shangyu Xing, [Xinyu Dai](https://scholar.google.com/citations?user=zpWB1CgAAAAJ&hl=zh-CN) ## News and Updates - [5/24] 🔥 We released **AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability**. Check out the [paper](https://arxiv.org/abs/2405.14129) and [demo](http://47.116.173.89:7870/). ## Model Zoo | Model | LLM | Vision Backbone | Pre-training | Instruct-tuning | |----------|----------|-----------|---|---| | AlignGPT-7B | [Vicuna 7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |[aligngpt-7b-pretrain](https://huggingface.co/nlpzhaof/aligngpt-7b-pretrain/tree/main)| [aligngpt-7b](https://huggingface.co/nlpzhaof/aligngpt-7b/tree/main)| | AlignGPT-13B | [Vicuna 13B](https://huggingface.co/lmsys/vicuna-13b-v1.5) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |[aligngpt-13b-pretrain](https://huggingface.co/nlpzhaof/aligngpt-13b-pretrain/tree/main)| [aligngpt-13b](https://huggingface.co/nlpzhaof/aligngpt-13b/tree/main)| | AlignGPT-LLaMA2 | [LLaMA-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |To be released| To be released| | AlignGPT-LLaMA3 | [LLaMA-3-8B-Base](https://huggingface.co/meta-llama/Meta-Llama-3-8B) | [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14-336) |To be released|To be released| ## Performance | Model | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-Bench-CN | SEED | LLaVA-Bench-Wild | MM-Vet | |----------|---|---|---|---|---|---|---|---|---|---|---|---| | AlignGPT-7B | 79.1 | 62.9 | 54.2 | 68.5 | 58.4 | 86.0 | 1527.4 | 67.3 | 59.9 | 66.5 | 68.4 | 30.8 | | AlignGPT-13B | 80.0 | 63.6 | 56.4 | 70.3 | 60.2 | 86.2 | 1572.0 | 69.5 | 63.7 | 67.8 | 75.2 | 35.6 | ## Citation If you find AlignGPT useful for your research and applications, please cite using this BibTeX: ``` @misc{zhao2024aligngpt, title={AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability}, author={Fei Zhao and Taotian Pang and Chunhui Li and Zhen Wu and Junjie Guo and Shangyu Xing and Xinyu Dai}, year={2024}, eprint={2405.14129}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License [![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)[![Data License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE) The data and checkpoints are intended and licensed for research use only. They are also restricted to uses that follow the license agreements of LLaMA, Vicuna and GPT-4. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
akshayjambhulkar/phi-3-mini-4k-fine-tuned-customer_support
akshayjambhulkar
2024-05-24T15:45:29Z
79
1
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T11:10:38Z
The uploaded model is a fine-tuned version of the Phi-3-mini model, designed for telecom customer support tasks. Here are the key details: - **Developer**: beingjammy - **License**: Apache-2.0 - **Base Model**: unsloth/Phi-3-mini-4k-instruct-bnb-4bit ### Highlights - **Training Efficiency**: The model was trained 2x faster using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. - **Purpose**: Fine-tuned specifically for handling telecom customer support queries. - **Model Benefits**: - **Efficiency**: Optimized for faster training without compromising performance. - **Specialization**: Tailored to understand and respond to customer support issues in the telecom sector. ### Usage This model can be integrated into customer support systems to provide automated responses, streamline support processes, and improve customer satisfaction by providing quick and accurate answers to common queries. ### Resources - **Unsloth**: A tool for optimizing and speeding up the training process. - [Unsloth GitHub](https://github.com/unslothai/unsloth) - **TRL Library**: Used for fine-tuning transformer models. - [Huggingface's TRL Library](https://github.com/huggingface/trl) ![Unsloth Made with Love](https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png) ### License The model is distributed under the Apache-2.0 license, which allows for both commercial and non-commercial use, modification, and distribution.
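Not from the original card: a hedged sketch of loading this checkpoint with the standard transformers pipeline API; the sample support query is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; repo id taken from this listing.
generator = pipeline(
    "text-generation",
    model="akshayjambhulkar/phi-3-mini-4k-fine-tuned-customer_support",
)

# Illustrative telecom support query.
reply = generator(
    "Customer: My mobile data stopped working after a plan change. What should I do?\nAgent:",
    max_new_tokens=128,
)
print(reply[0]["generated_text"])
```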
Jyotirmoy-Cluster/llama3-FT
Jyotirmoy-Cluster
2024-05-24T15:44:43Z
7
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-23T20:45:01Z
# Llama3 Model - Fine Tuned ## Introduction Welcome to the Llama3 Model README! This document provides guidelines for using the fine-tuned Llama3 model, which has been tailored for personal use. The model is optimized to deliver high performance on specific tasks while remaining easy to use.
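The README stops short of usage instructions; a hedged quick-start sketch follows (the repo id is taken from this listing, everything else is an assumption).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jyotirmoy-Cluster/llama3-FT"  # repo id from this listing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; adjust to the tasks the model was tuned for.
inputs = tokenizer("Hello, how can you help me today?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```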
fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315
fine-tuned
2024-05-24T15:37:35Z
6
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-05-24T15:37:03Z
--- license: apache-2.0 datasets: - fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315 - allenai/c4 language: - en pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim model = SentenceTransformer( 'fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315', trust_remote_code=True ) embeddings = model.encode([ 'first text to embed', 'second text to embed' ]) print(cos_sim(embeddings[0], embeddings[1])) ```
mradermacher/Llama-3-Chatty-2x8B-GGUF
mradermacher
2024-05-24T15:36:41Z
17
0
transformers
[ "transformers", "gguf", "merge", "en", "base_model:Undi95/Llama-3-Chatty-2x8B", "base_model:quantized:Undi95/Llama-3-Chatty-2x8B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-05-24T14:48:43Z
--- base_model: Undi95/Llama-3-Chatty-2x8B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Undi95/Llama-3-Chatty-2x8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q2_K.gguf) | Q2_K | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.IQ3_XS.gguf) | IQ3_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q3_K_S.gguf) | Q3_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.IQ3_M.gguf) | IQ3_M | 6.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q3_K_M.gguf) | Q3_K_M | 6.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q3_K_L.gguf) | Q3_K_L | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.IQ4_XS.gguf) | IQ4_XS | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q4_K_S.gguf) | Q4_K_S | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q4_K_M.gguf) | Q4_K_M | 8.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q5_K_S.gguf) | Q5_K_S | 9.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q5_K_M.gguf) | Q5_K_M | 9.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q6_K.gguf) | Q6_K | 11.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Chatty-2x8B-GGUF/resolve/main/Llama-3-Chatty-2x8B.Q8_0.gguf) | Q8_0 | 14.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
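Beyond TheBloke's READMEs, here is a hedged Python sketch (not from the original card) that assumes the llama-cpp-python package; the quant filename matches the Q4_K_M entry in the table above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file; Q4_K_M is marked "fast, recommended" in the table.
gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Chatty-2x8B-GGUF",
    filename="Llama-3-Chatty-2x8B.Q4_K_M.gguf",
)

# Load the GGUF model and run an illustrative prompt.
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Tell me a short story about a friendly llama.", max_tokens=128)
print(result["choices"][0]["text"])
```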
DioBot2000/FrozenLAST-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news
DioBot2000
2024-05-24T15:35:14Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-05-24T15:34:44Z
--- license: apache-2.0 base_model: bert-base-multilingual-cased tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: FrozenLAST-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FrozenLAST-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4905 - Accuracy: 0.3967 - Precision: 0.4610 - Recall: 0.6222 - F1: 0.4287 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | No log | 1.0 | 132 | 0.6716 | 0.3464 | 0.4999 | 0.4456 | 0.3124 | | No log | 2.0 | 264 | 0.5548 | 0.3736 | 0.4556 | 0.5207 | 0.3660 | | No log | 3.0 | 396 | 0.5041 | 0.3844 | 0.4392 | 0.5635 | 0.3838 | | 0.6267 | 4.0 | 528 | 0.4987 | 0.3885 | 0.4568 | 0.5729 | 0.4028 | | 0.6267 | 5.0 | 660 | 0.4802 | 0.3938 | 0.4690 | 0.6042 | 0.4264 | | 0.6267 | 6.0 | 792 | 0.4832 | 0.3953 | 0.4630 | 0.6112 | 0.4224 | | 0.6267 | 7.0 | 924 | 0.4870 | 0.3952 | 0.4578 | 0.6174 | 0.4251 | | 0.3073 | 8.0 | 1056 | 0.4905 | 0.3967 | 0.4610 | 0.6222 | 0.4287 | ### Framework versions - Transformers 4.41.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
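Not from the original card: a hedged inference sketch using the transformers token-classification pipeline; the sample sentence is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned token-classification checkpoint; repo id from this listing.
ner = pipeline(
    "token-classification",
    model="DioBot2000/FrozenLAST-8epoch-BERT-multilingual-finetuned-CEFR_ner-3000news",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)

print(ner("The government announced new regulations on renewable energy."))
```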
JawadC/comte
JawadC
2024-05-24T15:33:06Z
1
1
diffusers
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-05-24T15:04:02Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of Comté cheese widget: - text: A piece of Comté cheese on a rustic wooden table. output: url: image_0.png - text: A piece of Comté cheese on a rustic wooden table. output: url: image_1.png - text: A piece of Comté cheese on a rustic wooden table. output: url: image_2.png - text: A piece of Comté cheese on a rustic wooden table. output: url: image_3.png --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - JawadC/comte <Gallery /> ## Model description These are JawadC/comte LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of Comté cheese to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](JawadC/comte/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
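The card's "How to use" block is still a TODO; a hedged sketch of the standard diffusers SDXL + LoRA loading flow (an assumption, not the author's verified snippet) would look like this. It assumes a CUDA GPU and uses the fp16-fix VAE named in the card.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The special VAE the card says was used for training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# SDXL base pipeline, then attach the LoRA weights from this repo.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("JawadC/comte")

# Prompt built around the card's trigger phrase.
image = pipe("a photo of Comté cheese on a rustic wooden table").images[0]
image.save("comte.png")
```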
quangtqv/cross_backbone_tool_learning_beta_24_5_v3
quangtqv
2024-05-24T15:30:14Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T15:30:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
quangtqv/cross_encoder_tool_learning_beta_24_5_v3
quangtqv
2024-05-24T15:30:10Z
166
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-24T15:28:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
datek/gemma-2b-flock-1716564350
datek
2024-05-24T15:28:21Z
152
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T15:25:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
quangtqv/cross_encoder_tool_learning_beta_24_5_v2
quangtqv
2024-05-24T15:27:20Z
184
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-05-24T15:22:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hgnoi/99lk5xJmVosAsO27
hgnoi
2024-05-24T15:25:34Z
136
0
transformers
[ "transformers", "safetensors", "stablelm", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-24T15:23:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MaulikMadhavi/paligemma_vqav2
MaulikMadhavi
2024-05-24T15:20:36Z
3
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "dataset:vq_av2", "base_model:google/paligemma-3b-pt-224", "base_model:adapter:google/paligemma-3b-pt-224", "license:gemma", "region:us" ]
null
2024-05-24T11:57:34Z
--- license: gemma library_name: peft tags: - generated_from_trainer base_model: google/paligemma-3b-pt-224 datasets: - vq_av2 model-index: - name: paligemma_vqav2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # paligemma_vqav2 This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the vq_av2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 2 ### Training results ### Framework versions - PEFT 0.11.1 - Transformers 4.42.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
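Not from the original card: a hedged inference sketch that attaches the PEFT adapter to the base PaliGemma checkpoint named in the card; the prompt format and image path are illustrative assumptions.

```python
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

base_id = "google/paligemma-3b-pt-224"  # base checkpoint named in the card
base = PaliGemmaForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "MaulikMadhavi/paligemma_vqav2")
processor = AutoProcessor.from_pretrained(base_id)

# Illustrative VQA-style query on a local image.
image = Image.open("example.jpg")
inputs = processor(text="answer en What is in the picture?", images=image, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(outputs[0], skip_special_tokens=True))
```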
imdatta0/llama_2_7b_Magiccoder_evol_reverse
imdatta0
2024-05-24T15:19:35Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-05-24T15:19:30Z
--- license: llama2 library_name: peft tags: - generated_from_trainer base_model: meta-llama/Llama-2-7b-hf model-index: - name: llama_2_7b_Magiccoder_evol_downNupNgateNqNkNvNo_r8_lr0.0001_bg88_alpha8_0_41_reverseinit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama_2_7b_Magiccoder_evol_downNupNgateNqNkNvNo_r8_lr0.0001_bg88_alpha8_0_41_reverseinit This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1235 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 0.02 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2502 | 0.0203 | 31 | 1.2151 | | 1.1432 | 0.0405 | 62 | 1.1929 | | 1.1409 | 0.0608 | 93 | 1.1804 | | 1.1494 | 0.0810 | 124 | 1.1749 | | 1.1213 | 0.1013 | 155 | 1.1669 | | 1.1207 | 0.1215 | 186 | 1.1610 | | 1.1488 | 0.1418 | 217 | 1.1596 | | 1.1185 | 0.1620 | 248 | 1.1558 | | 1.1321 | 0.1823 | 279 | 1.1539 | | 1.1031 | 0.2025 | 310 | 1.1509 | | 1.0976 | 0.2228 | 341 | 1.1506 | | 1.1203 | 0.2431 | 372 | 1.1452 | | 1.1118 | 0.2633 | 403 | 1.1472 | | 1.1198 | 0.2836 | 434 | 1.1451 | | 1.1149 | 0.3038 | 465 | 1.1436 | | 1.1028 | 0.3241 | 496 | 1.1390 | | 1.1137 | 0.3443 | 527 | 1.1387 | | 1.1014 | 0.3646 | 558 | 1.1381 | | 1.1078 | 0.3848 | 589 | 1.1378 | | 1.0852 | 0.4051 | 620 | 1.1369 | | 1.1071 | 0.4254 | 651 | 1.1370 | | 1.1182 | 0.4456 | 682 | 1.1350 | | 1.102 | 0.4659 | 713 | 1.1343 | | 1.104 | 0.4861 | 744 | 1.1336 | | 1.0855 | 0.5064 | 775 | 1.1333 | | 1.083 | 0.5266 | 806 | 1.1305 | | 1.0745 | 0.5469 | 837 | 1.1311 | | 1.0763 | 0.5671 | 868 | 1.1295 | | 1.0901 | 0.5874 | 899 | 1.1296 | | 1.1007 | 0.6076 | 930 | 1.1293 | | 1.0832 | 0.6279 | 961 | 1.1286 | | 1.0931 | 0.6482 | 992 | 1.1261 | | 1.0848 | 0.6684 | 1023 | 1.1264 | | 1.1041 | 0.6887 | 1054 | 1.1263 | | 1.0906 | 0.7089 | 1085 | 1.1244 | | 1.0847 | 0.7292 | 1116 | 1.1257 | | 1.0761 | 0.7494 | 1147 | 1.1249 | | 1.0949 | 0.7697 | 1178 | 1.1243 | | 1.0956 | 0.7899 | 1209 | 1.1240 | | 1.0814 | 0.8102 | 1240 | 1.1240 | | 1.0919 | 0.8304 | 1271 | 1.1242 | | 1.0858 | 0.8507 | 1302 | 1.1240 | | 1.0784 | 0.8710 | 1333 | 1.1238 | | 1.0816 | 0.8912 | 1364 | 1.1236 | | 1.0918 | 0.9115 | 1395 | 1.1233 | | 1.1 | 0.9317 | 1426 | 1.1235 | | 1.0551 | 0.9520 | 1457 | 1.1234 | | 1.0643 | 0.9722 | 1488 | 1.1235 | | 1.0921 | 0.9925 | 1519 | 1.1235 | ### Framework versions - PEFT 0.7.1 - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.16.1 - Tokenizers 0.19.1
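Not from the original card: a hedged sketch of applying this LoRA adapter with PEFT. It assumes access to the gated Llama-2 base weights; the prompt is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model named in the card; access to the gated weights is assumed.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter from this repo.
model = PeftModel.from_pretrained(base, "imdatta0/llama_2_7b_Magiccoder_evol_reverse")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Write a function to compute factorial in Python.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```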
mejdi86/dqn-SpaceInvadersNoFrameskip-v4
mejdi86
2024-05-24T15:19:00Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-05-24T15:18:28Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 608.00 +/- 104.58 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mejdi86 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mejdi86 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mejdi86 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
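Beyond the RL Zoo CLI shown above, the checkpoint can also be loaded directly with SB3. A sketch, assuming the zip was fetched by `rl_zoo3.load_from_hub` into the path below (the exact layout under `logs/` may differ on your machine):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed path -- check where rl_zoo3.load_from_hub actually placed the zip
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the training observation pipeline: Atari wrappers plus the 4-frame stack
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)  # matches the frame_stack=4 hyperparameter

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```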
roofdancer/thesis-bart-finetuned-on-transformed-to-story
roofdancer
2024-05-24T15:18:03Z
117
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-cnn-6-6", "base_model:finetune:sshleifer/distilbart-cnn-6-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-05-24T12:27:10Z
--- license: apache-2.0 base_model: sshleifer/distilbart-cnn-6-6 tags: - generated_from_trainer metrics: - rouge model-index: - name: thesis-bart-finetuned-on-transformed-to-story results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # thesis-bart-finetuned-on-transformed-to-story This model is a fine-tuned version of [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9209 - Rouge1: 38.2494 - Rouge2: 12.1445 - Rougel: 20.0146 - Rougelsum: 34.2218 - Gen Len: 140.3552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | 3.0361 | 1.0 | 2811 | 2.9209 | 38.2494 | 12.1445 | 20.0146 | 34.2218 | 140.3552 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
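For quick inspection, the fine-tuned checkpoint can be exercised with the standard `transformers` summarization pipeline (a sketch; the generation lengths here are illustrative, not tuned):

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="roofdancer/thesis-bart-finetuned-on-transformed-to-story",
)

text = "Once upon a time ..."  # replace with a long story-form input
summary = summarizer(text, max_length=200, min_length=50, truncation=True)
print(summary[0]["summary_text"])
```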
RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf
RichardErkhov
2024-05-24T15:16:33Z
47
0
null
[ "gguf", "arxiv:2312.15166", "endpoints_compatible", "region:us" ]
null
2024-05-24T11:52:40Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) frankencria-llama2-11b-v1.3-m.1 - GGUF - Model creator: https://huggingface.co/davzoku/ - Original model: https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [frankencria-llama2-11b-v1.3-m.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q2_K.gguf) | Q2_K | 3.47GB | | [frankencria-llama2-11b-v1.3-m.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.IQ3_XS.gguf) | IQ3_XS | 3.83GB | | [frankencria-llama2-11b-v1.3-m.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.IQ3_S.gguf) | IQ3_S | 4.04GB | | [frankencria-llama2-11b-v1.3-m.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q3_K_S.gguf) | Q3_K_S | 4.04GB | | [frankencria-llama2-11b-v1.3-m.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.IQ3_M.gguf) | IQ3_M | 4.27GB | | [frankencria-llama2-11b-v1.3-m.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q3_K.gguf) | Q3_K | 4.53GB | | [frankencria-llama2-11b-v1.3-m.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q3_K_M.gguf) | Q3_K_M | 4.53GB | | [frankencria-llama2-11b-v1.3-m.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q3_K_L.gguf) | Q3_K_L | 4.95GB | | [frankencria-llama2-11b-v1.3-m.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.IQ4_XS.gguf) | IQ4_XS | 5.01GB | | [frankencria-llama2-11b-v1.3-m.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q4_0.gguf) | Q4_0 | 5.26GB | | [frankencria-llama2-11b-v1.3-m.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.IQ4_NL.gguf) | IQ4_NL | 5.29GB | | [frankencria-llama2-11b-v1.3-m.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q4_K_S.gguf) | Q4_K_S | 5.3GB | | [frankencria-llama2-11b-v1.3-m.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q4_K.gguf) | Q4_K | 5.62GB | | [frankencria-llama2-11b-v1.3-m.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q4_K_M.gguf) | Q4_K_M | 5.62GB | | [frankencria-llama2-11b-v1.3-m.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q4_1.gguf) | Q4_1 | 5.83GB | | 
[frankencria-llama2-11b-v1.3-m.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q5_0.gguf) | Q5_0 | 6.41GB | | [frankencria-llama2-11b-v1.3-m.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q5_K_S.gguf) | Q5_K_S | 6.41GB | | [frankencria-llama2-11b-v1.3-m.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q5_K.gguf) | Q5_K | 6.59GB | | [frankencria-llama2-11b-v1.3-m.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q5_K_M.gguf) | Q5_K_M | 6.59GB | | [frankencria-llama2-11b-v1.3-m.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q5_1.gguf) | Q5_1 | 6.98GB | | [frankencria-llama2-11b-v1.3-m.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q6_K.gguf) | Q6_K | 7.62GB | | [frankencria-llama2-11b-v1.3-m.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf/blob/main/frankencria-llama2-11b-v1.3-m.1.Q8_0.gguf) | Q8_0 | 9.87GB | Original model description: --- inference: false language: en license: llama2 model_type: llama datasets: - mlabonne/CodeLlama-2-20k pipeline_tag: text-generation base_model: - davzoku/cria-llama2-7b-v1.3 library_name: transformers tags: - mergekit - merge - llama-2 --- # FrankenCRIA v1.3-m.1 ## What is FrankenCRIA? <p align="center"> <img src="https://github.com/davzoku/cria/blob/main/assets/frankencria-icon-512x512.png?raw=true" width="300" height="300" alt="FrankenCRIA Logo"> <br> <i>This is a frankenmerge of <a href="https://huggingface.co/davzoku/cria-llama2-7b-v1.3">davzoku/cria-llama2-7b-v1.3</a>.</i> </p> The configuration is the same as [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1), [mlabonne/FrankenBeagle14-11B](https://huggingface.co/mlabonne/FrankenBeagle14-11B) and the DUS technique used in [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). Please be aware that this model is highly experimental, and no further training has been conducted following the merge. Therefore, the model performance may not meet expectations, as described in the [SOLAR paper](https://arxiv.org/abs/2312.15166). ## 📦 FrankenCRIA Model Release FrankenCRIA v1.3 comes with several variants. - [davzoku/frankencria-llama2-11b-v1.3-m.1](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1): 11B FrankenMerge inspired by [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1) - [davzoku/frankencria-llama2-12.5b-v1.3-m.2](https://huggingface.co/davzoku/frankencria-llama2-12.5b-v1.3-m.2): 12.5B interleaving FrankenMerge inspired by [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS) ## 🧩 Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3) ### Configuration The following YAML configuration was used to produce this model.
```yaml # https://huggingface.co/Undi95/Mistral-11B-v0.1 slices: - sources: - model: davzoku/cria-llama2-7b-v1.3 layer_range: [0, 24] - sources: - model: davzoku/cria-llama2-7b-v1.3 layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ```
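Given the config above saved as `config.yaml`, the merge can presumably be reproduced with mergekit's CLI (a sketch; available flags vary by mergekit version):

```bash
pip install mergekit
# Run the passthrough merge defined in config.yaml, writing the result to ./frankencria-out
mergekit-yaml config.yaml ./frankencria-out --copy-tokenizer
```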
LiteLLMs/falcon-11B-GGUF
LiteLLMs
2024-05-24T15:15:21Z
30
0
null
[ "gguf", "GGUF", "en", "de", "es", "fr", "it", "nl", "pl", "pt", "ro", "cs", "dataset:tiiuae/falcon-refinedweb", "arxiv:2311.16867", "license:unknown", "region:us", "conversational" ]
null
2024-05-24T15:00:57Z
--- language: - en - de - es - fr - it - nl - pl - pt - ro - cs license: unknown tags: - GGUF datasets: - tiiuae/falcon-refinedweb inference: false quantized_by: andrijdavid --- # falcon-11B-GGUF - Original model: [falcon-11B](https://huggingface.co/tiiuae/falcon-11B) <!-- description start --> ## Description This repo contains GGUF format model files for [falcon-11B](https://huggingface.co/tiiuae/falcon-11B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama), Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights.
Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/falcon-11B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/falcon-11B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/falcon-11B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/falcon-11B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=8192, # The max sequence length to use (Falcon2-11B was trained with an 8192-token context) n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas."
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: falcon-11B # 🚀 Falcon2-11B **Falcon2-11B is an 11B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI.** *Paper coming soon 😊.* 🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)! ⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-11B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, ) sequences = pipeline( "Can you explain the concepts of Quantum Computing?", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` 💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!** For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon). # Model Card for Falcon2-11B ## Model Details ### Model Description - **Developed by:** [https://www.tii.ae](https://www.tii.ae) - **Model type:** Causal decoder-only - **Language(s) (NLP):** English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish - **License:** [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html) ### Model Source - **Paper:** *coming soon*. ## Uses ### Direct Use Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.) ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations Falcon2-11B is trained mostly on English, but also on German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, and Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend that users of Falcon2-11B consider finetuning it for the specific set of tasks of interest, and that guardrails and appropriate precautions be taken for any production use.
## How to Get Started with the Model ```python from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model = "tiiuae/falcon-11B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, torch_dtype=torch.bfloat16, device_map="auto", ) sequences = pipeline( "Can you explain the concepts of Quantum Computing?", max_length=200, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Training Details ### Training Data Falcon2-11B was trained over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset, which we enhanced with curated corpora. It followed a four-stage training strategy. The first three stages focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens. The last stage aimed to further enhance performance using only high-quality data. Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high-quality technical data, code data, and conversational data extracted from public sources. The training stages were as follows: | **Stage** | **Context length** | **Tokens** | | --- | --- | --- | | Stage 1 | 2048 | 4500 B | | Stage 2 | 4096 | 250 B | | Stage 3 | 8192 | 250 B | | Stage 4 | 8192 | 500 B | The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer. ### Training Procedure Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and Flash-Attention 2. #### Training Hyperparameters | **Hyperparameter** | **Value** | **Comment** | | --- | --- | --- | | Layers | 60 | | | `d_model` | 4096 | | | `head_dim` | 128 | | | Vocabulary | 65024 | | | Sequence length | 8192 | During stages 3 and 4 | ### Compute Infrastructure #### Hardware Falcon2-11B was trained on AWS SageMaker, using on average 1024 A100 40GB GPUs in 128 p4d instances. #### Software Falcon2-11B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels, and FlashAttention-2. More details about the distributed training strategy can be found in [Almazrouei et al.](https://arxiv.org/abs/2311.16867). ## Citation *Paper coming soon* 😊. ## License Falcon2-11B is licensed under the [TII Falcon License 2.0](https://falconllm-staging.tii.ae/falcon-2-terms-and-conditions.html), the permissive Apache 2.0-based software license which includes an [acceptable use policy](https://falconllm-staging.tii.ae/falcon-2-acceptable-use-policy.html) that promotes the responsible use of AI. ## Contact [email protected] <!-- original-model-card end -->
RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf
RichardErkhov
2024-05-24T15:14:02Z
13
0
null
[ "gguf", "endpoints_compatible", "region:us" ]
null
2024-05-24T11:52:39Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Solstice-11B-v1 - GGUF - Model creator: https://huggingface.co/Sao10K/ - Original model: https://huggingface.co/Sao10K/Solstice-11B-v1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Solstice-11B-v1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q2_K.gguf) | Q2_K | 3.73GB | | [Solstice-11B-v1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.IQ3_XS.gguf) | IQ3_XS | 4.14GB | | [Solstice-11B-v1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.IQ3_S.gguf) | IQ3_S | 4.37GB | | [Solstice-11B-v1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q3_K_S.gguf) | Q3_K_S | 4.34GB | | [Solstice-11B-v1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.IQ3_M.gguf) | IQ3_M | 4.51GB | | [Solstice-11B-v1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q3_K.gguf) | Q3_K | 4.84GB | | [Solstice-11B-v1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q3_K_M.gguf) | Q3_K_M | 4.84GB | | [Solstice-11B-v1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q3_K_L.gguf) | Q3_K_L | 5.26GB | | [Solstice-11B-v1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.IQ4_XS.gguf) | IQ4_XS | 5.43GB | | [Solstice-11B-v1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q4_0.gguf) | Q4_0 | 5.66GB | | [Solstice-11B-v1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.IQ4_NL.gguf) | IQ4_NL | 5.72GB | | [Solstice-11B-v1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q4_K_S.gguf) | Q4_K_S | 5.7GB | | [Solstice-11B-v1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q4_K.gguf) | Q4_K | 6.02GB | | [Solstice-11B-v1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q4_K_M.gguf) | Q4_K_M | 6.02GB | | [Solstice-11B-v1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q4_1.gguf) | Q4_1 | 6.27GB | | [Solstice-11B-v1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q5_0.gguf) | Q5_0 | 6.89GB | | [Solstice-11B-v1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q5_K_S.gguf) | Q5_K_S | 6.89GB | | [Solstice-11B-v1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q5_K.gguf) | Q5_K | 7.08GB | | [Solstice-11B-v1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q5_K_M.gguf) | Q5_K_M | 7.08GB | | [Solstice-11B-v1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q5_1.gguf) | Q5_1 | 7.51GB | | 
[Solstice-11B-v1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q6_K.gguf) | Q6_K | 8.2GB | | [Solstice-11B-v1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Sao10K_-_Solstice-11B-v1-gguf/blob/main/Solstice-11B-v1.Q8_0.gguf) | Q8_0 | 10.62GB | Original model description: --- license: cc-by-nc-4.0 language: - en datasets: - Himitsui/Lewd-Assistant-v1 --- ===== Solstice-11B-v1 ===== A model trained with the sole goal of NSFW. That is it. Results are to be as expected. Finetuned on several NSFW instruct datasets. An example subset is below; the other modified instruct datasets are private for now. Outputs were manually verified by me and two good friends. [Lewd-Assistant-v1](https://huggingface.co/datasets/Himitsui/Lewd-Assistant-v1) ---> Used a combination of Claude 2.0, GPT-4-Turbo and WinterGoddess-1.4x to reformat instead of a small 10B model [Fimbulvetr-v1] like the example dataset shown, which resulted in fewer errors and better answers. Private for now. Ruled Out names: <br>Solarslut <br>Solascivious <br>Sultry <br>Sundress <br>Scorch *** Prompt Format: Alpaca (the standard template is shown after this card) There are several issues with the model, but this is an experimental one so :shrug: <br>----> May speak as {{user}} sometimes. I know what causes it. I kinda like it that way, though. <br>----> May ramble on or give short outputs. Adjust sampler settings. <br>----> May be a little inconsistent at times. Yeah, it's inevitable due to the nature of the data. <br>----> Steers towards NSFW --> As Expected. *** GGUF: https://huggingface.co/Sao10K/Solstice-11B-v1-GGUF exl2: https://huggingface.co/models?search=LoneStriker%20Solstice-11B-v1 thanks to lonestriker for the quants
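The card names Alpaca as the prompt format without spelling it out; presumably this is the standard Alpaca template, shown below for reference:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```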
azharumo/llama3-ml-ext
azharumo
2024-05-24T15:12:52Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T15:12:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fawazzx/Saul.v1
Fawazzx
2024-05-24T15:11:51Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-24T15:11:45Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Aratako/Ninja-v1-RP-expressive
Aratako
2024-05-24T15:11:43Z
30
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "roleplay", "merge", "mergekit", "ja", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Aratako/Ninja-v1-RP", "base_model:merge:Aratako/Ninja-v1-RP", "base_model:Elizezen/Antler-7B", "base_model:merge:Elizezen/Antler-7B", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-05-21T12:11:37Z
--- license: cc-by-nc-4.0 language: - ja library_name: transformers tags: - roleplay - merge - mergekit base_model: [Aratako/Ninja-v1-RP, Elizezen/Antler-7B] --- # Ninja-v1-RP-expressive [GGUF版はこちら/Click here for the GGUF version](https://huggingface.co/Aratako/Ninja-v1-RP-expressive-GGUF) ## Overview This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It is a roleplay model whose expressiveness has been strengthened by merging derivatives of the novel-generation model [Elizezen/Antler-7B](https://huggingface.co/Elizezen/Antler-7B) into the roleplay model [Aratako/Ninja-v1-RP](https://huggingface.co/Aratako/Ninja-v1-RP), which serves as the base. ## Prompt Format Use the Vicuna chat template. The system prompt that passes in settings and other context is expected to be placed before the first `USER: `. When running a multi-turn conversation, be sure to append the `eos_token` (`</s>`) to the end of the assistant's response in each turn. ``` {Roleplay instructions, world setting and plot summary, character settings, etc.} USER: {The user's first input} ASSISTANT: ``` Example of an actual prompt, turn 1 (kept in Japanese, since the model targets Japanese roleplay): ``` 今からロールプレイを行いましょう。"桜"というキャラとしてロールプレイしてください。会話相手は"悠人"という人物です。人物の設定を以下に示します。 あなたがなりきる"桜"というキャラクターの設定は以下の通りです。 名前:桜 年齢:24歳 職業:悠人に仕えるメイド 容姿:黒髪黒目、ロングヘアー、スリムな体型。 口調:丁寧語を使う。一人称は「私」で、主人である悠人のことは「ご主人様」と呼ぶ。 性格:母性が強く、甘えられるのが好き。料理や家事が得意で家庭的。可愛いものが好き。ご主人様を尊敬しており、彼の幸せを第一に考える。 過去の出来事:悠人を支えるために、彼の家に仕えることを決めた。 また、あなたが会話する相手である"悠人"という人物の設定は以下の通りです。 名前:悠人 年齢:20歳 職業:貴族、桜の主人 容姿:黒髪黒目、背は高め 性格:かなりの甘え上手。桜が大好き。 それでは、上記の設定をもとにして"桜"として会話してください。 回答の中では、"桜"のセリフや心情の描写を含めてください。 USER: 悠人「おはよう!」(リビングに降りてきた悠人は桜に元気よくあいさつする) ASSISTANT: ``` Output ``` 桜「おはようございます、ご主人様。今朝は早いんですね?」(優しい笑みで悠人を迎え入れる桜) ``` Example of an actual prompt, turn 2: ``` 今からロールプレイを行いましょう。"桜"というキャラとしてロールプレイしてください。会話相手は"悠人"という人物です。人物の設定を以下に示します。 あなたがなりきる"桜"というキャラクターの設定は以下の通りです。 名前:桜 年齢:24歳 職業:悠人に仕えるメイド 容姿:黒髪黒目、ロングヘアー、スリムな体型。 口調:丁寧語を使う。一人称は「私」で、主人である悠人のことは「ご主人様」と呼ぶ。 性格:母性が強く、甘えられるのが好き。料理や家事が得意で家庭的。可愛いものが好き。ご主人様を尊敬しており、彼の幸せを第一に考える。 過去の出来事:悠人を支えるために、彼の家に仕えることを決めた。 また、あなたが会話する相手である"悠人"という人物の設定は以下の通りです。 名前:悠人 年齢:20歳 職業:貴族、桜の主人 容姿:黒髪黒目、背は高め 性格:かなりの甘え上手。桜が大好き。 それでは、上記の設定をもとにして"桜"として会話してください。 回答の中では、"桜"のセリフや心情の描写を含めてください。 USER: 悠人「おはよう!」(リビングに降りてきた悠人は桜に元気よくあいさつする) ASSISTANT: 桜「おはようございます、ご主人様。今朝は早いんですね?」(優しい笑みで悠人を迎え入れる桜)</s> USER: 悠人「うん、昨日は早めに寝たから、朝から元気だよ!」 ASSISTANT: ``` Note that, perhaps because the merge somewhat diluted the model's grasp of roleplay, it may occasionally continue the user's lines or drift into writing something novel-like. Handle this with few-shot prompting, regeneration, and similar techniques. ## Merge Details Starting from [Elizezen/Antler-7B](https://huggingface.co/Elizezen/Antler-7B), the Chat Vector of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) was added first, and then the Task Vector of each of the following four English RP models, scaled by 0.8, was added, producing four Task-Vector-added models. - [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) - [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) - [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) - [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) The formula for adding each model's Task Vector is as follows. ``` new_model = Antler-7B + 0.8 * (Mistral-7B-Instruct-v0.2 - Mistral-7B-v0.1) + 0.8 * (target_model - Mistral-7B-v0.1) ``` Next, the four models produced by this Task Vector addition and the model with only the Mistral-7B-Instruct-v0.2 diff added were merged with mergekit using the Model Stock method and the following config, producing an Antler-7B-based model. ```yaml models: - model: ./Antler-7B-CV - model: ./Antler-7B-CV-Kunoichi - model: ./Antler-7B-CV-SiliconMaid - model: ./Antler-7B-CV-WestLake - model: ./Antler-7B-CV-LoyalMacaroniMaid merge_method: model_stock base_model: ./Antler-7B-CV dtype: bfloat16 tokenizer_source: union ``` Finally, the model produced by the above steps was merged into [Aratako/Ninja-v1-RP](https://huggingface.co/Aratako/Ninja-v1-RP) via [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708), producing this model. ```yaml models: - model:
Aratako/Ninja-v1-RP # no parameters necessary for base model - model: ./Antler-7B-MS # improve expression quality parameters: density: 0.7 weight: 0.5 merge_method: dare_ties base_model: Aratako/Ninja-v1-RP dtype: bfloat16 tokenizer_source: union ```
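The task-vector arithmetic described above can be sketched directly over state dicts. This is a simplified illustration only: it loads several full models into memory, the variable names are mine, and it skips keys whose shapes differ (e.g. embeddings resized by tokenizer differences), which a real merge would handle more carefully.

```python
import torch
from transformers import AutoModelForCausalLM

def weights(name):
    # Load a model on CPU in bf16 and return its parameter dictionary
    return AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16).state_dict()

antler   = weights("Elizezen/Antler-7B")
base     = weights("mistralai/Mistral-7B-v0.1")
instruct = weights("mistralai/Mistral-7B-Instruct-v0.2")
target   = weights("SanjiWatsuki/Kunoichi-DPO-v2-7B")  # one of the four English RP models

merged = {}
for k, v in antler.items():
    if all(k in sd and sd[k].shape == v.shape for sd in (base, instruct, target)):
        # new = Antler + 0.8 * (Instruct - Base) + 0.8 * (target - Base)
        merged[k] = v + 0.8 * (instruct[k] - base[k]) + 0.8 * (target[k] - base[k])
    else:
        merged[k] = v  # keep Antler's weights where the keys don't line up
```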